Is Your AI Ethical, Human-Centered and Pro-Social?

April 29, 2026

AI tools are no longer relatively simple search engines, driven by marketing metrics, that help us conduct our research. Rather, with AI we are using more sophisticated tools that conduct research in response to our prompting while making source-selection decisions, contextual judgments and semantic choices that shape the values expressed in the results.

As I have mentioned previously in these columns, when conducting research I most often seek input from a current version of each of the three frontier models. The three-viewpoint approach allows me to survey a variety of sources and points of view and to balance the output across ethical and social perspectives. In the case of this article, I have hyperlinked immediately below the foundational research responses elicited by my prompts on April 19.

The ChatGPT 5.4 Thinking model suggested, “In higher education, the ethically preferable AI model is not necessarily the most powerful one; it is the model that performs well enough for the use case while offering the strongest evidence of human-centered design, transparency, safety testing, and institutional controllability.”

The Claude Sonnet 4.6 Adaptive model suggested, “Choosing an AI model is now an ethical act, not just a technical one. The field has moved from ‘does this work?’ to ‘does this serve?’ Your column can help deans and department chairs become informed ethical consumers—not AI engineers, but critical stewards.”

The Gemini 3 Thinking model noted, “Given your recent work on maximizing returns in AI administration, shifting the focus toward ‘R-Values’ (Return on Values) is a timely and necessary evolution for the Higher Ed conversation.”

Before we look at the default values and orientations inherent in some of the leading AI models, let me remind you that in crafting your prompt you can encourage the tool to emphasize responses that address ethical considerations. Your prompt can direct the model to explore, highlight or emphasize pro-social or human-centered solutions and examples. Over time, if you include such directions consistently, the more sophisticated models that retain memory of your prior prompts will learn that you are interested in those values. If your preferred perspectives are not included, you can refine the responses with an iterative follow-up prompt.
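By way of illustration (the wording here is my own, not drawn from any one model’s documentation), such a direction can be as simple as a closing sentence appended to a research prompt: “In your response, identify the sources you relied on, note any ethical concerns raised in the literature, and highlight human-centered or pro-social approaches where they exist.” What matters is stating the value orientation explicitly rather than assuming the model will infer it.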

Cornelia C. Walther is a visiting scholar at the Wharton School of the University of Pennsylvania and a humanitarian practitioner who spent more than 20 years at the United Nations. Her research focuses on leveraging AI for social good. Walther notes in a recent edition of Knowledge at Wharton that most assessment of AI models is conducted “exclusively through the lens of efficiency gains, cost reductions, and revenue lift.” However, Walther says, “Existing dashboards do not capture whether an AI system is fair, whether it is eroding or building trust, whether it is making the people who use it more capable or quietly deskilling them, and whether its environmental footprint is accounted for or simply ignored.”

Late last summer, Walther published an article in Forbes titled “Why ProSocial AI Is ProPlanetary AI. A Promise for Planetary Harmony,” in which she laid out an array of elements for assessing an AI tool’s sensitivity to social good. Walther notes that pro-social AI is “not just about making AI more helpful or ethical. It’s about creating technology that is simultaneously pro-people, pro-planet, and pro-potential.”

She points to the 2025 AI Safety Index from the Future of Life Institute as an early example of such an assessment. In that index, which graded seven of the largest AI developers, Anthropic scored a C-plus with a 2.64, OpenAI a C with 2.10 and Google DeepMind a C-minus with 1.76. Notably, DeepSeek scored an F with 0.37.

If you are seeking to look more closely at the AI tools you use, including custom tools that your university may deploy for specific purposes, Walther suggests crossing the key pro-social elements against one another to create a four-by-four grid. Those elements, the 4 T’s and the 4 P’s, are detailed in the Knowledge at Wharton article:

The 4 T’s

  • Tailored: Is the AI system designed for the specific context, culture and constraints of its users—not copy-pasted from a generic template?
  • Trained: Is the system built on representative, inclusive data and objectives that encode the values the organization actually wants to promote, not proxy metrics that are merely convenient?
  • Tested: Is it rigorously evaluated for bias, robustness, and unintended consequences—before deployment and continuously afterwards?
  • Targeted: Is it applied where AI adds genuine value and withheld—deliberately—where human judgment is irreplaceable?

The 4 P’s

  • Purpose: Does the system advance a mission that all stakeholders can be proud of, beyond the next quarterly cycle?
  • People: Does it improve the experience, agency and well-being of everyone who builds, uses and is affected by it?
  • Profit: Does it generate durable financial value—not by externalizing costs onto society, but by creating genuine worth?
  • Planet: Is its energy consumption, materials footprint and systemic environmental impact accounted for and actively reduced?

Walther suggests assembling a leadership team ready to act now. The entry point is deliberately low friction. Choose one AI system currently in production, such as a customer-facing chatbot, a hiring-screening tool or a demand-forecasting model, and convene a 90-minute cross-functional workshop with representatives from technology, HR, finance, legal and sustainability. Working together through the 16 cells of the four-by-four matrix, score each cell on a simple traffic-light system: green (strong), amber (developing) or red (not compliant).
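To make the matrix concrete, each cell pairs one of the T’s with one of the P’s. The cell where Tested meets People, for instance, might ask whether the hiring-screening tool has been audited for demographic bias both before and after deployment; the cell where Targeted meets Planet might ask whether the demand-forecasting model runs only when the decision at stake justifies its energy cost. (These particular questions are my own illustrations, not Walther’s.)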

You do not need a consultant or a new software platform to do this. You need intellectual honesty, a willingness to act on what you find and the conviction that the institutions that flourish in the algorithmic age will be those with the wisdom to decide first what deserves to be managed and then to build the instruments to match. That 90-minute conversation is where the shift from “treasure what you can measure” to “measure what you should treasure” begins.

If your institution-wide mission or goals include ethical, human-centered or pro-social values, you must assess and, where necessary, remediate the AI tools that fall short of those collective values. Are you ready to lead the initiative to address the pro-social orientation of the AI tools used in your university, department, college, school or division?


