So much of the conversation around AI lives in the extremes. We debate the rise of superintelligence, fear the collapse of labor markets, and wonder what friendship will mean when it becomes impossible to tell an avatar from a human being. These polarized narratives have reinforced a deeply held conviction of mine: we must design a people-first AI future. Having witnessed the societal harm unleashed by social media platforms in the last decade, I am convinced that it is our collective and individual responsibility to ensure that AI evolves under strict human guidance and in service of human flourishing.
That is why it was inspiring to hear Mustafa Suleyman, Microsoft AI CEO, just a few feet away from me, speak about the human values being embedded into Microsoft Copilot. His perspective, shared at the Paley International Council Summit which I attended last week, reflects a maturing and deeply intentional vision for what he calls humanist AI.
Mustafa Suleyman’s Humanist Superintelligence
Suleyman outlined how Microsoft is developing AI with malleable personalities, not just agentic capabilities. The company is building systems with emotional and social intelligence - AI that can engage groups, express nuance, and push back respectfully when needed. The new “Real Talk” mode, for instance, is being designed to help Microsoft Copilot challenge users appropriately rather than simply flatter or agree with them. Achieving this, he noted, requires recruiting psychologists, therapists, and scriptwriters to shape more authentic interactions. This is personality engineering at scale: continuous, real-time sculpting of language and behavior, much like advertising’s emotional storytelling but applied to dynamic, responsive systems. It stands in sharp contrast to OpenAI’s GPT-4o update in April, which struggled with excessive sycophancy.
He also discussed the risks of people misinterpreting AI as being conscious. His team has built in safeguards to prevent confusion and harm: Copilot will regularly remind users what it is and what it is not, and unlike some of the other large language models, will avoid adult interactions, and reinforce its role as a tool that serves human goals. As Mustafa put it, the goal is not to build an artificial consciousness, but to ensure that any future superintelligence remains aligned with human values rather than seeking autonomy from them.
What struck me most was his articulation of a “humanist superintelligence” - technology that aspires to elevate humanity rather than eclipse it. That framing shifts the question from “What can AI do?” to “What should AI help us become?” It is a powerful perspective, and one that corporations everywhere should adopt. Doing so will drive greater value for all stakeholders - shareholders, employees, customers, and partners alike.
Esther Dyson and the Case for Better Humans
Barely two weeks earlier, I had the opportunity to hear Esther Dyson, someone I’ve been following for decades, speak. As an early Internet pioneer, she adds another crucial layer to the humanist superintelligence conversation. As she bluntly said, “People talk about unexplainable AI when they should be more concerned about the unexplainable humans running the companies that develop the AI.” Her insight reframes the debate: the real challenge is not whether AI will replace us but whether we are ready to rise to the responsibility of being human in the age of intelligent machines. Dyson reminds us that we are analog, emotional beings: slow to think, quick to act, and that our task is not to compete with AI but to use it to become wiser, more self-aware, and more ethical.
Dyson also argues that the path to a healthy AI future is not through more coding classes or machine learning bootcamps but through human training. We need to cultivate better judgment, deeper empathy, and a stronger understanding of our own motivations and those of others. When we train ourselves to be better humans, we can direct AI toward better ends: automating the routine so that we can spend more time on what truly matters, such as teaching, caregiving, mentoring, and creating.
Taken together, Suleyman and Dyson sketch two sides of the same vision. Suleyman calls for humanist AI, technology that elevates people, while Dyson calls for humanist humans, people who elevate themselves to use AI wisely. The intersection of their ideas is where our real opportunity lies: to design a future that not only serves us but also makes us worth serving. The future of AI, if we choose it, can be one where intelligence is shared not just between humans and machines but among humans themselves, in the very act of becoming better at being human.
Five Takeaways for Business Leaders
- Be more human than you have ever been. Your edge in the AI era lies in empathy, emotional intelligence, and the ability to build authentic trust in a machine-mediated world. Lead with intellect and human-centered compassion. 
- Develop a human-first AI philosophy for your world. Make human flourishing as meaningful and measurable as efficiency and automation. It may feel counterintuitive, but this mindset will drive business performance and long-term value. 
- Define a human-first policy for your organization. Specify where humans must lead, orchestrate, interpret, and create as AI scales across functions, operations, and geographies. Protect human judgment and accountability by design. 
- Radically scale human education and training. Train teams not just to use AI responsibly but to excel at the skills only humans can master. Make “being better at human stuff” a core competency. 
- Accelerate sandbox experimentation. Encourage experimentation across teams to explore how human-AI collaboration can unlock new forms of innovation, connection, and human-led performance. 
The next era of AI will reward those who use it to deepen their humanity, not diminish it. The companies that succeed will be the ones that design for a future where people come first. It is a choice every leader can make.
Where I’ve been
I recently had the privilege of speaking at the National Association of Corporate Directors (NACD) Annual Summit on Agentic AI and the Boardroom. This was the third time I was on stage at the NACD Summit and I represented United Rentals, a $15 billion revenue company with 30,000 employees, where I have had the honor of serving as a board director for several years. Joining me was Amit Shah, Founder and CEO of InstaLILY AI (where I’m an investor) for a conversation on how Agentic AI is reshaping corporate governance, workforce dynamics, and strategic oversight.
With more than 300 board directors in the room, we explored questions such as: Are AI agents more cost-effective and productive than human employees in the long run? How will job roles evolve as AI becomes embedded in every function? What does true AI literacy for directors look like, and when should boards rely on vendors, academics, or consulting partners for education? How can directors use AI today to review materials, assess performance, and simulate risk scenarios? Only about ten percent of those present said they currently use AI to review board materials - a signal of how early we still are.
The conversation reinforced one clear takeaway: board directors must move beyond basic AI fluency to actively driving and influencing enterprise transformation. The next phase of corporate leadership will hinge on how directors harness AI to strengthen governance, enhance decision quality, and shape responsible adoption across the enterprise. The era of Agentic AI is not just about efficiency or cost savings; it is about redefining how leaders think, decide, and lead in a world where intelligence is shared between humans and machines.
In the coming weeks, I’ll be speaking at:
- NACD AI Education - Future-Proofing Talent: The Board’s Role in Re-skilling for the AI Era. A discussion with other board members on what boards need to do. 
- AI Trailblazers Winter Summit – Co-hosting on December 4 at the Rockefeller Center in NY with an exceptional line up of speakers. Register here. 
- Corporate Events – Delivering keynotes, multi-part educational workshops and AI strategy engagements for clients across the country. 
What I’m reading
- Anthropic lands its biggest enterprise deployment with Deloitte (CNBC) 
- The Sequoia Agent Economy Playbook (Product Market Fit) 
- Mondelez to use Gen AI tool to slash marketing costs (Reuters) 
- Sora plagued by violent and racist images (The Guardian) 
What I’ve written lately
- Why Leadership with Heart Still Matters (October 2025) 
- AI’s Fork in the Road for Marketers (September 2025) 
- Is Search Really Going Away (August 2025) 
- The Myth of Creative Immunity (July 2025) 
- The Silent Shift in Marketing Leadership (June 2025) 