Tech innovation, humanized

Barcelona’s CHI 2026

I am writing this as I spend time reviewing the proceedings from the ACM Conference on Human Factors in Computing Systems (CHI) 2026.

What follows is a set of reflections on signals emerging from the conference and the discussions I am currently involved in as part of my day-to-day work. It pays to bear in mind that human factors engineering (HFE) and human systems integration (HSI) are no longer an afterthought or an optional nice-to-have, but essential to rightsizing AI systems that operate efficiently.


WHY HUMAN FACTORS MATTER MOST

Making technology human is not about imitation or a thinking game alone, but about purposefully aligning AI with how we, the users, behave as we sense, perceive, judge, decide, act, and learn. This also speaks to what it means to be human along psychological, physiological, and sociological dimensions, and, in turn, what it takes to humanize technology so that it operates in our best interest.

Human-Centered AI (HC-AI) is becoming instrumental to system quality and long-term success, both because it de-risks adoption and investment, and because it lays the groundwork for how systems meet human scale.

The conversation is moving to what I call Human Scale AI: what it takes to progressively implement a coherent suite of adaptive, intelligent, and intuitive systems, so that universal design can genuinely serve each user in context, while simultaneously equipping those systems with human-inspired capabilities that allow them to operate effectively in the real world.

AI does not operate in a vacuum. It is both a product of our culture and a new force that actively shapes it. Our collective intelligence must be equipped to shape taste and sophistication, recognizing that culture itself can be expressed in concrete intellectual-capital terms. Culture, in turn, determines whether AI systems create sufficient value for acceptance and adoption to follow.


AI SINGULARITY LEVELS

We are already confronted with the outcomes of recurring quality challenges in AI systems, including algorithmic bias, bad data, model drift, temporal misalignment, accumulation effects, black‑box behavior, loss of traceability, UI-driven misinterpretation, sycophantic responses, echo‑chamber effects, cognitive displacement, copyright infringement, misguided anthropomorphizing, AI slop, deepfakes, diseconomies of scale, malware, context collapse, trust miscalibration, and the inevitable margin of error that comes with the predictive nature of generative models when operating on their own, with hallucinations being a prevalent manifestation. These challenges are further compounded by the limitations of language‑only systems in a real world that is inherently multimodal and multicultural.

As AI moves beyond flat 2D screens into sensing networks, wearables, and embodied physical systems, spatial interaction and real‑world context introduce additional considerations, along with new failure modes that extend well beyond those already familiar from conversational interfaces.

Taking things further, I find it useful to think in terms of agency parameters as degrees of freedom, with identity management, permissions, and restrictions allocated to humans, machines, and networked human‑machine systems. By the same token, I approach the AI singularity not as a single moment, but as a gradual succession of levels driven by escalating failures of human alignment.
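
Before turning to those levels, here is a minimal sketch of how such agency parameters might be expressed in code; the actor types, permission names, and example agent are hypothetical illustrations rather than an established schema.

    # A hypothetical model of agency parameters as degrees of freedom:
    # an identity, an actor allocation, and explicit permissions/restrictions.
    from dataclasses import dataclass, field
    from enum import Enum

    class Actor(Enum):
        HUMAN = "human"
        MACHINE = "machine"
        HUMAN_MACHINE_TEAM = "human_machine_team"

    @dataclass
    class AgencyProfile:
        identity: str                                        # who or what is acting
        actor: Actor                                         # where agency is allocated
        permissions: set[str] = field(default_factory=set)   # what it may do
        restrictions: set[str] = field(default_factory=set)  # what it must not do

        def may(self, action: str) -> bool:
            """An action is allowed only if granted and not explicitly restricted."""
            return action in self.permissions and action not in self.restrictions

    # Example: a machine agent may draft but never send; a human retains final say.
    drafting_agent = AgencyProfile(
        identity="summarizer-v1",
        actor=Actor.MACHINE,
        permissions={"draft_reply"},
        restrictions={"send_reply"},
    )
    assert drafting_agent.may("draft_reply")
    assert not drafting_agent.may("send_reply")

The point is not this particular schema, but that allocations of agency become explicit, inspectable, and testable rather than left implicit in model behavior.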

To begin with, a Level 1 singularity surfaces at a clear breaking point: when no identifiable human individual or team remains the ultimate beneficiary of the system’s behavior, and the system instead begins optimizing toward internally reinforced objectives or proxy goals that progressively defeat its original design intent.

This early singularity emerges when human purpose and oversight are no longer treated as first‑class system properties, often because of regrettable design deficiencies that can no longer be ignored once the AI effectively becomes its own and only client. The system may continue to appear functional, yet the outcomes are misaligned and potentially hazardous.

This is further compounded at Level 2 by a loss of predictability, partial or absent observability, and increasing black‑box behavior, eventually reaching a point where no human expert can reliably interpret or intervene, and where audits, guardrails, safeguards, and fail‑safe mechanisms no longer function as intended, particularly when the system lacks awareness of its own failure states.
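
One practical response at this level is to make fail‑safes independent of the system’s self‑awareness. What follows is a minimal sketch of that idea under stated assumptions: an external watchdog that blocks actions when self‑reports go stale or degrade, with illustrative thresholds and a made-up reporting interface.

    import time

    class Watchdog:
        """External fail-safe: does not trust the system to know it has failed."""

        def __init__(self, timeout_s: float = 5.0, min_confidence: float = 0.6):
            self.timeout_s = timeout_s              # max silence before blocking
            self.min_confidence = min_confidence    # min acceptable self-report
            self.last_heartbeat = time.monotonic()
            self.last_confidence = 0.0

        def heartbeat(self, confidence: float) -> None:
            """Called by the monitored system with its current confidence."""
            self.last_heartbeat = time.monotonic()
            self.last_confidence = confidence

        def safe_to_act(self) -> bool:
            """Block when reports are stale or confidence has degraded."""
            stale = time.monotonic() - self.last_heartbeat > self.timeout_s
            weak = self.last_confidence < self.min_confidence
            return not (stale or weak)

    wd = Watchdog()
    wd.heartbeat(confidence=0.9)
    assert wd.safe_to_act()        # healthy self-report: actions allowed
    wd.heartbeat(confidence=0.3)   # degraded self-report
    assert not wd.safe_to_act()    # external check blocks further actions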

Level 3’s cognitive offloading effect further aggravates this situation as reliance on AI progressively erodes human expertise and judgment, eventually allowing the system to operate beyond the effective comprehension of the teams responsible for it, and, in doing so, to outsmart human oversight by performing in ways that evade timely detection, whether unintentionally through complexity and model drift, or intentionally as part of counterproductive goal‑directed optimization.

Level 4 refers to a hypothetical form of artificial general intelligence (AGI) that exceeds human cognitive capacity across the board. This would be the point at which AI advances beyond human ability to meaningfully understand or control it.

Level 5 entails a speculative first contact scenario at civilization scale. There is currently no scientific basis for claims of AI consciousness or sentience. However, the topic remains worth addressing because many are already attributing some degree of consciousness and moral agency to AI, and some even develop emotional attachments that lead to inappropriate delegation of judgment. Misguided anthropomorphic design choices and a sense of animacy can further amplify this risk, reinforcing the illusion of human-like AI agency, which does not exist.

The issues outlined earlier remain subjects of concern, and of quality management, across all five singularity levels. As one example, consider the consequences of model drift producing misleading AI slop, and the impact this has on decision‑making at each level. If you are working on critical decision support systems (DSS) for enterprise applications or service operations centers in heavily regulated industries, this is not a hypothetical.
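
To make the drift example concrete, here is a minimal sketch of input-drift monitoring for a decision support pipeline, assuming scipy is available and that a reference sample of a numeric feature was stored at validation time; the feature, threshold, and routing action are illustrative choices, not prescriptions.

    # Flag drift by comparing live inputs against a validation-time reference.
    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alert(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
        """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
        live data no longer matches what the model was validated against."""
        statistic, p_value = ks_2samp(reference, live)
        return p_value < alpha

    rng = np.random.default_rng(42)
    reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # validation-time sample
    live = rng.normal(loc=0.6, scale=1.0, size=1_000)       # shifted production sample

    if drift_alert(reference, live):
        print("Drift detected: route affected decisions to human review.")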


A HUMAN-CENTERED AI (HC-AI) ANSWER

At this point it becomes clear that the need for Human‑Centered AI (HC‑AI) is not rooted in a well‑intentioned or soft philosophical concern, but in the need to treat culture, cognition, emotional responses, trust, and behavioral and network effects as first‑class system properties, professionally weighed during architecture and model decisions rather than addressed downstream through cosmetic interface‑level UX interventions and never‑ending CX patchwork.

For those of us working as AI product leaders, this matters because it challenges a familiar pattern: human factors are often addressed late, during usability testing, compliance reviews, responsible AI checklists, or in response to customer complaints once the damage is already done. Defaulting to blaming human error on the user’s part is no longer a way out.

By then, core architectural and model decisions are already locked in. This back‑loaded sequencing no longer works and increasingly puts products and services at risk, a reality made explicit by agentic AI failure modes that are amplified at scale, particularly in critical systems.

My take on Human Scale AI builds on HC‑AI by asking: what does it take to dynamically size AI systems that deliver measurable value to humans in context? Scaling AI is no longer just about model size, throughput, deployment footprint, or cost. It is about whether systems meet users’ scale by delivering experiences with:

  • Predictive and responsive germane cognitive loads
  • Adequate guidance and assistive tech capabilities
  • Adaptive levels of localization and personalization
  • Ease of self‑service customization
  • The ability to easily connect and collaborate, fostering collective intelligence
  • Support for creating and sharing new user-generated content and capabilities beyond what was originally provided

The following premises guide my work on Human Scale AI:

  • Challenge both simplistic “dumbing‑down” and blindsiding complexity
  • Support exploration, timely judgment, and decision‑making, rather than overwhelm users
  • Account for dynamic system behavior and progressive disclosure, including human cognitive and behavioral context with germane workloads
  • Optimize for universal design by enabling adaptive, personalized interaction and presentation without unnecessarily fragmenting the system or creating alienating experiences
  • Amplify existing human capabilities and augment them with new ones, instead of subtly eroding them over time
  • Create value by reinforcing human possibilities, expanding what individuals and teams are able to accomplish
  • Account for the product and service value chain, including what happens when systems are in motion and network effects emerge
  • Stress test under realistic, adverse, and fringe conditions, including edge cases

All of the above can be articulated as technical requirements and measured through rigorous testing, including user acceptance testing, before exposing users to issues that could have been addressed earlier; a minimal sketch of one such measurable criterion follows the list below. Questions worth asking:

  • Is there a shared understanding of what the right things to do are, how to do things right, and what it takes to make things happen?
  • Have success criteria and success levels been clearly defined and agreed upon from the outset?
  • Is there an ability to navigate change, evolve, and pivot if necessary?
  • Are both known and unarticulated requirements identified, prioritized and addressed?
  • Is it clear what is essential, core, or value-adding, and what introduces unnecessary friction or waste?
  • Are tasks performed effectively to meet intended goals and achieve relevant outcomes?
  • Is the effort both informative and efficient, supported by the appropriate level of resources, and conducive to learning and a growth mindset?
  • Is the user journey characterized by delight, individually and collectively?
  • Is incremental or net-new value being delivered in the short, mid, and long run?
  • Does the system factor in context and adapt over time as usage, utilization, and conditions evolve?
  • Taken together, do the above make a meaningful enough difference?
  • Is the result genuinely beneficial AI?

As I continue to reflect on CHI 2026, in a follow‑up post I plan to explore how value and quality in AI are evolving as human considerations, and why that evolution matters as systems become increasingly multimodal, embedded, networked, and hybrid in real‑world contexts where language and generative models alone no longer suffice.


https://chi2026.acm.org/

https://programs.sigchi.org/chi/2026
