
Collective Intelligence Design Challenges

(This paper was originally written as constructive criticism for the MapsMap challenge.)

We, the members of the Canonical Debate Lab (CDL), are a group of researchers, developers, and system thinkers who have independently been working on collective intelligence systems for several years.

… ‘collective intelligence’ can be understood as the enhanced capacity that is created when people work together, often with the help of technology, to mobilise a wider range of information, ideas and insights. Collective intelligence (CI) emerges when these contributions are combined to become more than the sum of their parts for purposes ranging from learning and innovation to decision-making. – Nesta.

Many projects, most recently the MapsMap challenge, aim at large-scale open collaboration through crowdsourcing, whether what is being collaborated on are decisions, sensemaking, or (as in the MapsMap case) definitions of problems and solutions. Those efforts overlap with our vision of collective intelligence. Our goal is to aggregate contributions of diverse stakeholders into a unified information space, which can be accessed through multiple views. Views are tailored to a specific use case—e.g. adding problems to a map—and can present differing perspectives and levels of detail.

We greatly appreciate the energy poured into this space. This area of work deserves more attention and more contributions, but above all better coordination of efforts. We call on those efforts to connect with one another, and with us. We are experts on the topic and have built CI systems ourselves. With this writing, we are not (yet) proposing solutions (though you can find references to past and ongoing projects below); rather, we want to draw attention to challenges that lie ahead, some of which are often overlooked.

Cognitive Capacity

People's time and knowledge are limited. We can't expect users of a CI system to be aware of, or motivated to read and learn, all relevant existing information. Therefore, information needs to be conveyed differently to people depending on their expertise and background. However, this comes at the risk of fragmenting information into sub-communities. While the same information can be conveyed differently to different users, it still needs to be connected, so that users who do have the necessary expertise or motivation can easily find and contribute information at the level of detail they are comfortable with.

So far, this assumes there is no disagreement about the information contained in the system. However, the amount of relevant information to consider, and the challenge of interpreting it, both increase with the diversity of perspectives ("multiple truths") included in the system. All the more so, since perspectives different from one’s own carry with them unfamiliar concepts, cognitive biases, and competing terminology.

Introducing hierarchies of more abstract to specific representations of the same information may reduce the amount of information that is presented to users at any given time, but such hierarchies can be complex to reason about and often still need to be traversed to be fully understood. In general, any mechanism that tries to reduce cognitive load by presenting less information, such as information hierarchies, takes time for users to learn. Furthermore, it introduces additional sources of disagreement, and thus, competing perspectives to be represented.
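As a minimal sketch of the abstract-to-specific hierarchies described above (the `Claim` type, example topic, and rendering depth are our own illustrative assumptions, not part of any existing CDL system), the idea is that a reader sees only the top levels by default, while deeper detail remains reachable by traversal:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One piece of information, with more specific refinements below it."""
    summary: str
    refinements: list["Claim"] = field(default_factory=list)

def render(claim: Claim, max_depth: int, depth: int = 0) -> list[str]:
    """Show only the first `max_depth` levels of refinement, reducing what
    a reader must process at once; deeper detail stays reachable."""
    lines = ["  " * depth + claim.summary]
    if depth < max_depth:
        for child in claim.refinements:
            lines.extend(render(child, max_depth, depth + 1))
    return lines

# Hypothetical example topic, for illustration only.
topic = Claim("Housing is unaffordable", [
    Claim("Construction is constrained", [Claim("Zoning limits density")]),
    Claim("Demand outpaces supply"),
])

print("\n".join(render(topic, max_depth=1)))  # top two levels only
```

Note that even this trivial structure illustrates the tradeoff from the paragraph above: where a refinement attaches, and at what level, is itself something contributors can disagree about.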

Map Diversity and Alignment

The MapsMap challenge lists the ability to define “target outcomes, milestones, and dependencies” as requirements for the stated purposes of coordinating on local problems and incentivizing desirable trajectories through problem spaces. We expect these are but a subset of anticipated and desirable requirements and purposes, operating on the shared information space consisting of all contributions. However, no single view (e.g. a map) of an information space can serve all purposes. Each purpose dictates specific requirements on how the view is structured, which may not be compatible with other ways of organizing and displaying information. Accordingly, multiple focused views of a shared information space are needed.

There are other reasons for supporting multiple views of the same information. Reducing cognitive load by introducing hierarchical views was already mentioned in the previous section on “Cognitive Capacity”. Furthermore, it is also unlikely that a single view can ever accommodate all perspectives and remain intelligible. Thus, there is a need to accommodate specialized views tailored to an individual or subcommunity’s unique perspective, filtering the information they care about. This gives a sense of ownership which otherwise is missing in a fully shared information space.
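The idea of many views over one shared space can be sketched as follows (the `Item` type, the tag vocabulary, and the example contributions are hypothetical; the point is only that a view is a filtered projection, not a separate copy of the data):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Item:
    """One contribution in the shared information space."""
    text: str
    tags: frozenset[str]

# The single shared information space: every contribution lives here once.
space = [
    Item("Funding gap for rural clinics", frozenset({"health", "funding"})),
    Item("Open dataset of clinic locations", frozenset({"health", "data"})),
    Item("Grant-matching process draft", frozenset({"funding", "process"})),
]

def view(predicate: Callable[[Item], bool]) -> list[Item]:
    """A view is only a filtered projection; it owns no data of its own."""
    return [item for item in space if predicate(item)]

health_map = view(lambda i: "health" in i.tags)    # one subcommunity's map
funding_map = view(lambda i: "funding" in i.tags)  # another purpose, same space

# Both views include the same underlying item about rural clinics,
# so an edit to it is visible everywhere it appears.
```

Because both maps project the same items, an update made through one view propagates to the other, which is exactly what is lost when communities maintain disconnected copies.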

Such a diversity of views, while necessary, can lead to fragmentation of the information space into multiple uncoordinated local views. Most tools we are aware of either focus on personal maps, or hope for the community to self-organize around a common map. Self-organization is relatively straightforward early in a project; but as a system gains adoption, new people join the conversation and contributions multiply, sometimes altering the meaning of existing information, and maintaining a coherent information space becomes increasingly difficult.

If information is not shared between views, fragmentation could lead to, for example, failing to identify dependencies, synergies, or common goals between projects. Local views should convey how much related information exists in other views, so users are invited to explore broader perspectives. But another risk is the illusion of synergy through spurious connections: for example, if interdependent projects refer to common goals, those goals need to be interpreted similarly. To ensure this, global alignment on key information represented across different local views is needed; i.e. a process whereby everyone agrees on the intended meaning of said information (e.g. common definitions).
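One way the "related information exists elsewhere" signal could work is sketched below (the item identifiers and link pairs are invented for illustration): given the links an item participates in, count how many of its neighbours fall outside the local view, so the view can display an "N related items elsewhere" hint.

```python
# Hypothetical links between items in the shared information space.
links = [("clinic-funding", "clinic-data"),
         ("clinic-funding", "grant-process"),
         ("grant-process", "budget-policy")]

def related_elsewhere(item: str, local_view: set[str]) -> int:
    """Count neighbours of `item` that the local view does not show."""
    neighbours = ({b for a, b in links if a == item}
                  | {a for a, b in links if b == item})
    return len(neighbours - local_view)

# A health-focused view that shows only two of the items above:
health_view = {"clinic-funding", "clinic-data"}
# "clinic-funding" has one neighbour ("grant-process") outside this view,
# so the view can invite the user to explore that broader connection.
```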

Mechanisms that support local, within-group alignment, and assume shared goals and processes (e.g., Git) are not designed to foster a coherent global understanding across projects, teams, or subcommunities.

Context and Provenance

All information contributed to a CI system is inherently contextual, dependent on time, place, and the mindset of the individual expressing it. For contributions to be understood as originally intended by others, it is important to have access to the provenance of the information, so that semantic ambiguities and exact meaning of terms can be resolved using the original context.
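Concretely, provenance might be modeled as metadata attached to every contribution, along these lines (the field names and the example are assumptions, not a proposed standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    """Where, when, and by whom a contribution was made."""
    author: str
    timestamp: str       # ISO 8601
    source_context: str  # e.g. the discussion the statement came from

@dataclass(frozen=True)
class Contribution:
    text: str
    provenance: Provenance

c = Contribution(
    text="Local maps should be open",
    provenance=Provenance(
        author="alice",
        timestamp="2023-05-01T10:00:00Z",
        source_context="thread on cartographic maps, not concept maps",
    ),
)

# A reader unsure what "maps" means here can consult the original context
# instead of guessing from their own background.
```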

However, even when provenance is available, context is never made fully explicit when information is introduced; and neither can it be. Therefore, even in the best-case scenario, conflicts will arise not only due to disagreement on explicitly stated information, but also due to differences in implicit assumptions that are made about context which may not be shared.

This is a major challenge for CI tools, which aim to aggregate knowledge by meaningfully linking information (e.g. problems) to each other. Since any linked information carries with it unstated implicit context, interpreting a single piece of information, or even the reason why it was linked, requires not only interpreting the information currently in focus, but also all information linked to it, and linked to that in turn, ad infinitum. If this makes it difficult for a single individual to interpret information in a CI system, this is even more so when trying to find agreement on how to interpret it in large communities (exacerbating what was stated in the “Cognitive Capacity” and “Map Diversity and Alignment” sections).

In essence, when large communities are involved, adding structure to information may have the paradoxical effect of further obfuscating it, unless mechanisms are introduced to negotiate ambiguity stemming from conflicting interpretations (informed by context or not).

Quality and Trust

High-quality information is needed for a CI system to act, or to be trusted, as the basis for decision-making, e.g. to distribute funding. Achieving high-quality information is an ongoing process; one should not expect that the given state of information in a CI system is final and requires no more additions or amendments. One particular ongoing challenge is maintaining a suitable signal-to-noise ratio. In addition, to guarantee that any mistakes or problems with the information it contains can be identified, a CI system should both be flexible enough for users to express any concerns they may have, and allow anyone to contribute freely to counteract potential biases.

However, open contribution is at odds with quality control. For example, free contributions open the system to manipulation by bad-faith actors, and loud voices can bias and demotivate other users. To counteract this, mechanisms for ranking and rating content, such as (community) moderation or reputation systems, need to be put in place. But to avoid losing users' trust in the governance of the CI system, these mechanisms need to be transparent and not overly restrictive, and the distribution of power needs to be—and be perceived as—fair. Striking the right balance between information quality, ease of contribution, and moderation is a major challenge for CI systems and necessarily involves tradeoffs.
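To make the transparency requirement concrete, here is a deliberately minimal reputation-weighted scoring rule (the weights, users, and votes are invented; real systems use far more sophisticated and attack-resistant schemes). The point is that when the formula is this simple and public, the ranking can be audited, which is one ingredient of the trust discussed above:

```python
def content_score(votes: list[tuple[str, int]],
                  reputation: dict[str, float]) -> float:
    """Weight each up/down vote (+1/-1) by the voter's reputation.
    Unknown voters default to weight 1.0. The formula is public,
    so any user can recompute and audit a ranking."""
    return sum(vote * reputation.get(user, 1.0) for user, vote in votes)

# Hypothetical reputations and votes:
reputation = {"expert": 3.0, "newcomer": 1.0}
votes = [("expert", +1), ("newcomer", -1), ("newcomer", -1)]

# The expert's single upvote (weight 3.0) outweighs two newcomer
# downvotes—which is precisely the kind of power distribution that
# must be perceived as fair, or trust in governance erodes.
```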


For many of these problems, we have already worked on and experimented with systems, both in academia and industry, embodying partial solutions. As such, we can provide extensive insight into the challenges that will confront anyone invested in this problem space.

Given the scale of the problem, in particular the diversity of use cases implicit in the original goal, we believe that a proper solution is more likely to take the form of an ecosystem of interconnected tools rather than a single product designed by a single team. To ensure that such an ecosystem works as a coherent whole, there is a need to identify common core requirements, in the form of a shared core data model and protocols, potentially using a shared infrastructure. If we can convince other designers of collective intelligence tools of the merit of this approach, which we are pursuing in the CDL, we would love for you to join us in collaborating on a shared vision of collective intelligence.

Signed by CDL members and allies: