Designers shape far more than products or interfaces. They shape how people act, decide, and relate to the world around them. From recommendation systems to account settings, design quietly guides everyday behavior. Because of this, design is never neutral. It always carries ethical consequences, whether designers intend them or not. This article reworks selected ideas from the author’s research, Rethinking Design Ethics from the Perspective of Spinoza, for a general audience.
Featured
Translation: Turning Research into Impacts
University Libraries: Your Gateway to Citizen Science Success
Are you tired of travelling across the country to various sites for data collection? Imagine having thousands of eager, geographically distributed volunteers ready to contribute to your research project. What if these volunteers were already engaged learners and information seekers, capable of following standard data collection protocols? This resource exists right now in every university library across Sri Lanka.
AI in a Tribal Context: Diverse Perspectives Matter in a Changing Landscape
With Artificial Intelligence’s (AI) seemingly increasing integration into various aspects of society, nations worldwide—including Tribal Nations—are assessing its impact on the changing landscape. AI is a revolutionary technology that poses potential opportunities and risks for federally recognized Indian Tribes (Tribal Nations or Tribes) and their citizens. This article provides an overview of the literature related to AI in a tribal context.
Problems With Archiving and Replaying Web Advertisements
Advertisements are an integral part of our cultural heritage, and this extends to online web advertisements. Unlike print ads, web ads are dynamic and interactive, which makes them difficult to archive and replay (i.e., load archived web resources in a web browser) successfully. To explore these challenges, we created a dataset of 279 archived web ads. During this study, we identified five major problems with archiving and replaying web ads, which we discuss in detail in our article “Problems With Archiving and Replaying Current Web Advertisements”.
AI in the House of God: A Threat, Tool or Transformation?
Artificial Intelligence (AI) is reshaping nearly every corner of human life—from classrooms to hospitals and corporate offices. But what happens when it enters the house of worship? Can a machine deliver the word of God, or does this cross a sacred boundary? Our study, “AI in the House of God: Threat, Tool or Transformation?” (ASIS&T 2025), explores how people respond to AI-driven sermons and what this means for the future of faith and technology.
When YouTube Meets Grandma: How Older Adults Perceive and Influence AI Recommendations
Behind every “You May Also Like” video sits an algorithm — an invisible curator quietly shaping what we see and what we don’t. Most of us take this invisible hand for granted. We know it’s there, even if we don’t fully understand it. But for older adults, those who first learned about the world through newspapers, radio, and television, the logic of these recommendations can be confusing.
Frontiers: Reaching Beyond Human Knowledge
Governance as Code: How AI is Enforcing Information Policies Directly in the Tech Stack
Traditional governance models, reliant on static documents and manual reviews, are fundamentally incompatible with the velocity and complexity of modern AI and software development. This paper examines the paradigm of “Governance as Code” (GaC), a transformative approach that embeds information policies, ethical guidelines, and compliance controls directly into the technology stack. By translating human-readable rules into machine-executable code, GaC enables proactive, automated enforcement within DevOps and AIOps pipelines. We explore practical implementations such as AI guardrails that filter sensitive prompts and automated risk-tiering systems that streamline project oversight.
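As one way to picture Governance as Code, the following Python sketch shows a minimal, hypothetical guardrail that checks prompts against machine-readable policy rules before they reach a model. The rule names, regex patterns, and risk tiers are illustrative assumptions, not any specific product’s policy set.

    # A minimal, hypothetical "Governance as Code" guardrail: machine-readable
    # policy rules are evaluated against a prompt before it reaches a model.
    # The rule names, regex patterns, and risk tiers are illustrative assumptions.
    import re
    from dataclasses import dataclass

    @dataclass
    class PolicyRule:
        name: str        # human-readable policy the rule encodes
        pattern: str     # regex that flags a potential violation
        risk_tier: str   # "block", "review", or "allow-with-logging"

    RULES = [
        PolicyRule("no-personal-data", r"\b\d{3}-\d{2}-\d{4}\b", "block"),
        PolicyRule("no-credentials", r"(?i)\b(password|api[_ ]?key)\s*[:=]", "block"),
        PolicyRule("regulated-topic", r"(?i)\b(medical diagnosis|legal advice)\b", "review"),
    ]

    def evaluate_prompt(prompt: str) -> dict:
        """Return the strictest decision triggered by any rule, plus an audit trail."""
        severity = {"block": 2, "review": 1, "allow-with-logging": 0}
        hits = [r for r in RULES if re.search(r.pattern, prompt)]
        verdict = max((r.risk_tier for r in hits), key=severity.get, default="allow")
        return {"verdict": verdict, "triggered_rules": [r.name for r in hits]}

    print(evaluate_prompt("my api_key = abc123, please debug my deployment script"))
    # -> {'verdict': 'block', 'triggered_rules': ['no-credentials']}

Embedding rules like these in a CI/CD or inference pipeline is what turns a written policy into one that is enforced automatically rather than reviewed manually.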
MemGPT: Engineering Semantic Memory through Adaptive Retention and Context Summarization
In traditional information systems, preservation is paramount and deletion represents data loss. MemGPT challenges this paradigm by implementing strategic forgetting through two key mechanisms: summarization and targeted deletion.
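To illustrate the general idea of strategic forgetting, the Python sketch below keeps a small memory store within a budget by summarizing older entries and deleting low-importance ones. It is a simplified illustration of the two mechanisms named above; the stubbed summarizer, the word-count budget, and the importance heuristic are assumptions of this sketch, not MemGPT’s actual implementation.

    # A sketch of strategic forgetting: summarize older entries, delete
    # low-importance ones, keep the newest. Illustrative only, not MemGPT's API.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryEntry:
        text: str
        importance: float          # 0.0 (disposable) .. 1.0 (must keep)

    @dataclass
    class MemoryStore:
        budget: int                # rough word budget standing in for a token budget
        entries: list = field(default_factory=list)

        def add(self, text: str, importance: float = 0.5) -> None:
            self.entries.append(MemoryEntry(text, importance))
            if self._size() > self.budget:
                self._forget()

        def _size(self) -> int:
            return sum(len(e.text.split()) for e in self.entries)

        def _forget(self) -> None:
            # Summarization: collapse the older half of memory into one condensed
            # record so its gist survives even as detail is discarded.
            half = max(1, len(self.entries) // 2)
            summary = self._summarize([e.text for e in self.entries[:half]])
            self.entries = [MemoryEntry(summary, importance=0.9)] + self.entries[half:]
            # Targeted deletion: while still over budget, drop the least important
            # entry, but never the most recent one.
            while self._size() > self.budget and len(self.entries) > 1:
                victim = min(self.entries[:-1], key=lambda e: e.importance)
                self.entries.remove(victim)

        @staticmethod
        def _summarize(texts: list) -> str:
            # Stand-in for an LLM call; here we just concatenate, stripping any
            # prefix left over from earlier summaries.
            return "SUMMARY: " + " | ".join(t.removeprefix("SUMMARY: ") for t in texts)

    store = MemoryStore(budget=12)
    for note in ["user prefers metric units", "asked about Mars rovers",
                 "small talk about the weather", "project deadline is Friday"]:
        store.add(note, importance=0.8 if "deadline" in note else 0.4)
    print([e.text for e in store.entries])
    # -> ['SUMMARY: user prefers metric units', 'project deadline is Friday']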
Forging a Middle Path: Canada’s Moment to Lead in AI Governance
Today, with the AI landscape evolving rapidly, especially with the explosive advancement of generative AI technologies, Canada finds itself pulled between two global powers: the United States, favouring open innovation, and the European Union, doubling down on strict AI regulation. Canada does have a proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27, which aims to regulate high-impact AI systems. However, AIDA is still under review and has yet to be finalized, leaving a critical gap in national legislation.
Opinion: Scholarly Ideas, Scientific Curiosity, Bold Propositions
The Human Stack: Why Soft Skills Are the Ultimate Competitive Advantage In Tech
In an era defined by rapid technological advancement, the most durable competitive advantage may be fundamentally human. This article introduces the concept of the “Human Stack”: the layered suite of soft skills, such as communication, empathy, and adaptability, that employees bring to an organization. While technical skills are essential for entry, it is these uniquely human capabilities that drive true differentiation and success. As automation encroaches on routine tasks, soft skills become critical differentiators that machines cannot replicate.
Co-Pilot, Not Autopilot: Navigating the Future of Qualitative Research
Recently, a group of three published an open letter signed by over 400 qualitative researchers. Their message was clear: “Keep AI away from our work.” They argue that using AI to find themes in data kills the “reflexive” process, the deeply human act of interpreting nuance, emotion, and meaning. I have spent my career championing the human touch in EdTech work and research. I understand their fear. However, I believe this total rejection is a mistake. We should not be banning Gen AI; we should be teaching researchers how to master it.
Can AI Have a Conscience? A Look at Ethics in Machine Learning
Can AI have a conscience? Of course, today’s AI isn’t a sentient being with feelings or guilt. It won’t lose sleep over a tough decision. But as artificial intelligence plays a bigger role in our lives, we do expect it to act responsibly. In essence, we want AI to follow ethical principles, a sort of programmed “conscience” so that it helps society without harming it. This is the crux of AI ethics, an increasingly important topic now that machine learning systems are making decisions that matter.
Education: Bring Learning and Enlightenment to Your Life
Exploring the Future of Human–AI Collaboration: Insights from “Human–AI Interaction and Collaboration”
How should people and AI work together in ways that are useful, ethical, and trustworthy? Edited by Dan Wu and Shaobo Liang (Wuhan University), “Human–AI Interaction and Collaboration” maps the fast-moving terrain where users, systems, and information meet—treating human strengths and machine strengths as complements, not substitutes. The introduction frames collaboration as a user-centered endeavor that must balance capability with ethics, transparency, and trust.
AI Literacy Is Information Literacy: One Academic Library’s Plan for AI Instruction
Artificial intelligence, specifically generative AI, is a topic that cannot be ignored in education, regardless of the level. As a library director at a four-year private university, I believe it is our duty as librarians to meet the challenge of AI head-on and meet the instructional needs AI creates. Both students and faculty should be trained in using generative AI: its proper uses, how it can serve as a tool, and how to use it constructively. Libraries are uniquely positioned to oversee this effort on college campuses, because AI literacy is information literacy, and librarians should lead.
The Complexity of Ethics-Centered AI Literacy in Higher Education
One of the main challenges academic librarians face when trying to develop programs and services that support AI literacy is the wide array of stances that institutions and individual faculty members take toward teaching with and about AI tools. These challenges apply not only to students but also to faculty, who likewise need help and guidance navigating the new technologies. One aspect that becomes central to this conversation is promoting the ethical use of AI tools in the academic environment. Although the topic has attracted considerable attention recently, the conversations remain fragmented and divisive.
Preparing Information Professionals to Educate Users on Generative AI: Best Practices from North Carolina Central University
The pace of adoption of generative AI has been groundbreaking—faster than the adoption of personal computers and the internet. Advocates argue that AI can help bridge digital literacy barriers and provide non-experts with access to specialized information, from coding assistance to digestible legal and medical information. It supports learning across the educational spectrum, from K-12 through higher education and workplace training. While we must accept that students will inevitably find and use generative AI tools, two critical issues demand attention: first, disparities in AI acceptance and use among students and faculty could deepen existing digital divides and affect educational and career outcomes; second, within higher education institutions, questions remain about whose responsibility it is to teach students how to use AI tools effectively and ethically.
Teaching Library Users About AI Images: A Case Study
AI-generated images and videos are now frequently found across social media, advertising, and academic spaces, yet many users interact with these visuals without recognizing them as AI or understanding how they are created. As academic libraries increasingly position themselves as leaders in information and digital literacy, AI image literacy presents both a challenge and an opportunity. To meet this need, I developed and facilitated an AI image literacy workshop focused on helping participants critically evaluate AI-generated images and videos while also understanding how they are made.
From Information Literacy to AI Literacy: Preparing Librarians for Emerging Responsibilities
As artificial intelligence reshapes how we search, write, and learn, librarians are increasingly expected to help communities navigate an unfamiliar digital landscape. This article advocates for incorporating AI literacy into Library and Information Science education and introduces a new course, “AI and Libraries,” designed to prepare future-ready information professionals. It emphasizes that AI literacy is critical for promoting equitable understanding and access in an age defined by intelligent systems.
Spanish Content
What Does It Mean to Be Intelligent?
Over the past two decades, AI has advanced by leaps and bounds, to the point that we are seriously discussing whether it is as intelligent as humans or even surpasses us. But before we get carried away by that excitement or that fear, it is important to look inward and ask ourselves: what makes us intelligent, conscious, and human?
Our Obsession with Killer Robots as Self-Reflection
Killer robots. Dystopian artificial intelligence. Machine uprisings. Why are we so fascinated by the idea of our devices turning against us? In this article, I unpack our obsession with the rise of the machines and show that much of its appeal is really a projection of our own fears and dilemmas.
Developing Online Information-Seeking Skills in Schoolchildren
The online research skills of students in Chile are not necessarily developed enough for them to tackle their school assignments.
Chinese Content
The Embodied Cognition and Experience Measurement Commons: A New Transformation in Information Behavior Research
Wuhan University has introduced cutting-edge digital intelligence technologies to take the lead in building the Embodied Cognition and Experience Measurement Commons (ECEMC), bringing innovative change to the experimental paradigm of information behavior research. The physical space consists of three functional zones, each supported by a different technical framework.
The Digital Interpretation Theater: A Methodological Framework for Experiments in the Digital Intelligent Revitalization of Cultural Heritage
Wang Xiaoguang, Zhao Ke

In the age of digital intelligence, the digitization and revitalization of cultural heritage is not only a matter of applying technology; it also profoundly shapes how technology and culture converge. As multifunctional spaces that bring together digital cultural heritage resources, tools, and interdisciplinary research methods (Pawlicka-Deger, 2020), digital humanities laboratories foster collaborative innovation and public participation and advance the digital preservation, interdisciplinary study, and broad dissemination of cultural heritage. Yet digital humanities still lacks design theory and methodological practice: it leans toward graphical presentation and interface publishing (Liu, 2013) and has not yet reached the methodological level of supporting knowledge discovery and generation (Burdick et al., 2012, p. 14). New interactive, participatory design methods are therefore urgently needed for the digital intelligent revitalization of cultural heritage.

Driven by digital intelligence technologies and digital humanities thinking, the Ministry of Education Philosophy and Social Science Laboratory for Intelligent Computing of Cultural Heritage at Wuhan University, China, has built a Digital Interpretation Theater for cultural heritage. The theater embodies an innovative idea: it lets cultural heritage be embedded in digital intelligent environments and provides a new kind of experimental space for humanities research. Such a theater can be understood as a situated practice that fuses physical entities, virtual spaces, tool and resource applications, research collaboration, and spatial representations of practice (Oiva, 2020). Seen this way, the continuing fusion of the theater with digital intelligence helps build theatrical experimental spaces that integrate people, objects, and technology. Concretely, the Digital Interpretation Theater is a cutting-edge interdisciplinary experimental space for exploring new paradigms of humanities experimentation. Built on tools such as 3D immersive projection, interactive large screens, XR storytelling, and advanced modeling, it integrates methods from history, information management, the humanities, literature, the arts, and artificial intelligence, forming a complete innovation chain of "digital incubation - virtual-physical fusion - collaborative interpretation - immersive experience - intelligent services" (see Figure 1) that supports the revitalization and study of cultural heritage and activates the historical, cultural, scientific, and artistic knowledge it embodies.

Figure 1: The innovation chain of the Digital Interpretation Theater for cultural heritage

Core concepts and forms

Digital incubation. Using digital capture equipment and high-performance computing, the theater documents and archives cultural heritage and builds openly shared smart heritage data, including knowledge graphs, text databases, image databases, 3D model libraries, cultural gene banks, and a cultural heritage GIS platform. These digitized resources provide strong data support for protecting, transmitting, and innovating on cultural heritage.

Virtual-physical fusion. Using digital twin methods, GIS technology, and augmented reality devices, the theater precisely links, maps, and overlays physical heritage objects with their interpretive information and knowledge, fusing the real world with digital space.

Collaborative interpretation. Drawing on human-AI collaboration, open crowdsourcing, data mining, simulation and reasoning, and artistic imagination, the theater interprets and reworks cultural heritage to reconstruct historical and cultural scenes. Through collaborative interpretation, experts and participants from different fields can work together on interdisciplinary research and teaching, understanding and presenting cultural heritage from multiple angles.

Immersive experience. With XR devices and interactive imaging systems, the theater offers users a lifelike, interactive, three-dimensional visual narrative environment. Users can enter virtual heritage scenes as digital avatars and interact with the characters and settings within them, gaining an immersive sensory experience.

Intelligent services. Using AIGC techniques and large language models such as ChatGPT, the theater provides services for heritage-related content generation, intelligent dialogue, organization and retrieval, and publishing and dissemination. These services support the revival and lasting transmission of fine traditional Chinese culture, meet users' knowledge needs, and offer personalized interaction and participation.

Technical architecture

The Digital Interpretation Theater takes data as its foundation, models as its engine, and technology as its support, analyzing, expressing, and presenting cultural heritage in an all-around, multidimensional way. It has established an adaptive, flexible, and modular technical architecture that underpins the digital revitalization of cultural heritage.

Data analysis and management. Statistical analysis, social network analysis, machine learning, and related expertise are used to process and analyze heritage data and to build databases, model libraries, gene banks, and other repositories that supply usable data resources for digital revitalization.

Model development and application. Building on the results of data analysis and management, a range of algorithms and techniques are applied to data mining, model construction, and optimization, including knowledge graph models, digital-physical symbiosis models, interpreter collaboration models, digital narrative models, and large language models, to support the interpretation and presentation of cultural heritage.

Software and hardware support. Components such as an intelligent voice dialogue platform, a multi-screen linkage system, a project management system, and visualization software enhance the theater's adaptability and flexibility of use.

Scenario simulation and narrative development. Through role-play or narrative participation, users explore and come to understand different forms of heritage interpretation, deepen their understanding of cultural heritage, and become more effective knowledge creators.

Interactive display and experience. Users are offered diverse interactive modes of display and experience, including virtual reality, augmented reality, and interactive exhibitions, to better present the history, culture, and value of cultural heritage.

Research and education. Interactive graphical dashboards, exploratory data maps, timelines, or combinations of the three are designed around user needs in a user-friendly way, so that professionals and non-professionals alike can take part in research, data exploration, computational modeling, and visiting experiences, advancing cultural heritage research and education.

Sustainable development management. A sound management system covering staff training, technical support, and quality control safeguards the theater's long-term, sustainable development.

Original article: Wang,
Perceiving Like Humans: Learning Structural Causal Models in Heterogeneous Graphs
In recent years, as the demand for modeling complex systems has grown, heterogeneous graph neural networks have become a research hotspot. However, existing methods typically suffer from fixed inference pipelines and spurious correlations, which limit their interpretability and generalization. To address this, we propose a novel heterogeneous graph learning framework that emulates human perception and decision-making to improve both predictive performance and the explainability of results.

Core research approach

Our method rests on the following key steps (a toy sketch of this pipeline appears at the end of this summary):
1. Semantic variable construction: graph schemas and meta-paths are used to extract semantic variables that humans can readily understand, such as papers, authors, and venues.
2. Causal relation discovery: a structural equation model automatically discovers causal relations among the variables, and the learned relations are used for prediction.
3. Target task prediction: a backward-inference algorithm translates the causal relations among semantic variables into predictions for the target task.

Datasets and experimental validation

The study was validated on three public datasets (DBLP, ACM, and IMDB), covering academic, social, and movie-recommendation scenarios: DBLP (predicting an author's research field), ACM (predicting a paper's academic category), and IMDB (predicting a movie's genre). By introducing three kinds of bias (homophily, degree distribution, and feature distribution), we tested the model's ability to generalize under different data distributions. The experiments show that the proposed model remains stable across all settings and generalizes significantly better than existing methods.

Main findings

1. Generalization: the model maintains high predictive performance under different bias conditions, demonstrating excellent adaptability.
2. Interpretability: causal graphs clearly expose the logic of influence among variables. For example, an author's research field is closely tied to the venues where their papers appear; the model captures this intuitive pattern, and its causal inferences are consistent with expert experience.

Practical outlook

The framework has broad application potential in technology mining, financial analysis, policy evaluation, and similar areas, for example using causal graphs to support decisions about technological innovation, or explaining influencing factors in financial analysis to improve transparency. Its high interpretability also opens the way to refining model logic and strengthening trustworthiness.

Conclusion

Heterogeneous graph learning is opening new possibilities for modeling and analyzing complex systems. By combining causal reasoning with heterogeneous graph neural networks, this study not only achieves accurate predictions on complex tasks but also deepens understanding of the model's reasoning process, offering a new perspective on applying artificial intelligence to real-world problems.

This article is based on the following work: Lin, T., Song, K.,
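To make the three-step pipeline above concrete, here is a toy Python sketch that uses synthetic data and an ordinary least-squares fit as a stand-in for the structural equation machinery. It illustrates the general idea only, not the authors' implementation; the variable names (venue_score, paper_count, field_label) are hypothetical semantic variables of the kind a meta-path might produce.

    # Toy illustration of: (1) semantic variables from meta-path-style aggregates,
    # (2) a hand-specified linear structural equation fitted by least squares,
    # (3) prediction for the target task. Synthetic data; assumptions throughout.
    import numpy as np

    rng = np.random.default_rng(0)
    n_authors = 200

    # Step 1 - semantic variables: per-author aggregates that a meta-path such as
    # author -> paper -> venue could yield in a real heterogeneous graph.
    venue_score = rng.normal(size=n_authors)                      # typical venue prestige
    paper_count = 2.0 * venue_score + rng.normal(scale=0.5, size=n_authors)
    field_label = (1.5 * venue_score + 0.5 * paper_count
                   + rng.normal(scale=0.3, size=n_authors)) > 0   # binary research field

    # Step 2 - structural equation: regress the target variable on its assumed
    # parents (the causal order is fixed by hand here; real methods discover it).
    X = np.column_stack([np.ones(n_authors), venue_score, paper_count])
    y = field_label.astype(float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Step 3 - target task prediction: apply the fitted equation to a new author.
    new_author = np.array([1.0, 0.8, 1.9])   # intercept, venue_score, paper_count
    print(f"predicted field score for new author: {float(new_author @ coef):.2f}")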