In November 2025, Nicole Brachetti Peretti, Founder of NJF Holdings, wrote an article for TIME responding to Pope Leo XIV’s call for AI developers to “cultivate moral discernment” and build systems reflecting justice and reverence for life. While some tech leaders, including Marc Andreessen, have mocked such calls, Nicole argues this dismissal is a mistake: we don’t just need AI regulation, we need AI morals.
Why regulation isn’t enough
Nicole outlines how governments are scrambling with initiatives like the EU AI Act, but regulation alone cannot tell us what kind of world we want to build. Rules answer “how” but rarely “why.” Ethics treated as compliance becomes sterile risk management rather than moral reflection. The deeper question, she argues, isn’t whether machines can think, but whether humans can still choose when algorithms already shape what we read, invest in, and trust.
The human element
The article challenges tech’s tendency to frame ethics computationally: alignment, safety layers, feedback loops. Nicole notes that conscience isn’t a parameter to be tuned; it’s a living capacity grown through empathy, culture, and relationship. This essence of human moral development cannot be replicated by computation.
Capital’s role
Nicole emphasises that what gets funded, gets built. Ethical due diligence should become as routine as financial due diligence, asking not just how large a technology might become, but what behaviour it incentivises and who it leaves behind. This is pragmatic foresight: trust will be the scarce commodity of the AI century.
Nicole concludes that the moral project ahead isn’t teaching machines right from wrong; it’s reminding ourselves that human dignity should be a design principle. “The future will be shaped not by the cleverness of our algorithms, but by the depth of our moral imagination.”
Read the full article in TIME here.