On the afternoon of March 10, 2025, the eighth session of the "Reflections on Human Rights" academic forum series, hosted by the Student Union of the Institute for Human Rights, was held in Room 1015 on the 10th floor of the Comprehensive Library Building at the Xueyuanlu Campus of China University of Political Science and Law. This forum invited Cao Lijun, a PhD candidate in Constitutional and Administrative Law at Beijing Institute of Technology Law School, as the keynote speaker. Cao Shengmin, Associate Professor at the School of Marxism, Ocean University of China, served as the commentator. The panel discussion included PhD candidates Huang Yanteng, Liang Jinchen, and Zhang Jingjing from Beijing Institute of Technology Law School, as well as Tian Aoni, a PhD candidate from the host institute. Master’s students from the institute attended as observers.

I. Keynote Report
The keynote speaker, Cao Lijun, delivered a report titled "Governance Model of 'Algorithmic Colleagues' in Automated Administration", divided into three sections: the theoretical connotation and value orientation of the "algorithmic colleague" governance model, its functional advantages, and its institutional safeguards.

In the first section, Cao Lijun explained the theoretical connotation and value orientation of the "algorithmic colleague" governance model. He emphasized that this model centers on human-machine collaboration, transcending the limitations of traditional "instrumentalism" and "subjectivity" paradigms to establish an equal cooperative relationship between administrative entities and algorithmic systems. Through a responsibility alignment mechanism, the model clarifies the accountability of administrative entities, requiring them to take responsibility for algorithmic decisions and to integrate practical considerations in discretionary administrative actions to mitigate technological risks. He further highlighted the model’s supplementary value to the "human-in-the-loop" mechanism: replacing competition with cooperation, enhancing supervisory efficacy, reinforcing human agency, clarifying administrative functions while safeguarding the rights of administrative counterparts, optimizing human-machine relations, and advancing government digital transformation.
In the second section, Cao Lijun outlined three functional advantages of the model. First, alignment with human-centric digital governance: by clarifying functional divisions and technical decentralization, the model prevents the "black-boxing" of algorithmic power and ensures clear accountability. Second, recognition of algorithms as "non-human actors": this shifts administrative legal relations from a binary to a triadic recursive structure, simultaneously improving digital literacy and protecting the rights of administrative counterparts. Third, reflexive governance: encoding rule-of-law values into algorithmic systems through human feedback and reinforcement learning enhances efficiency and aligns algorithmic power with the public interest, ensuring transparency and accountability in digital administration.
In the third section, Cao Lijun discussed institutional safeguards. He stated that responsibility alignment should apply only to discretionary and burdensome administrative actions to balance administrative efficiency and rights protection. Additionally, a "negative list + dynamic adjustment" mechanism should be established, where central authorities regulate the boundaries of algorithmic involvement while local governments flexibly adjust its depth. This prevents the erosion of administrative discretion by algorithmic bureaucratization and avoids the loss of grassroots decision-making autonomy. To address reduced transparency due to algorithmic "black-box" effects, the "algorithmic colleague" model emphasizes interpretability and deep reasoning models to improve understanding of decision-making logic. Reverse engineering and algorithmic mimicry optimization are proposed to enhance algorithmic credibility and ensure human-centric service delivery.
II. Panel Discussion
During the discussion, panelists provided critiques of the keynote report.
Huang Yanteng commented on the responsibility alignment and "human-in-the-loop" governance models. She acknowledged the innovation of using responsibility alignment to rebalance power and define administrative functions, but suggested that this mechanism be elaborated further. Regarding the "human-in-the-loop" mechanism, she argued that "comparing humans and algorithms" contradicts the principle of human primacy: the focus should be on integrating human judgment into algorithmic decision-making rather than on contrasting levels of intelligence. Finally, she noted that while collaboration with "algorithmic colleagues" is emphasized, their independence warrants further exploration.

Liang Jinchen praised the theoretical innovation of the paper, particularly the "algorithmic colleague" concept and governance model. However, he noted an overreliance on abstract concepts without concrete examples, which weakens the paper's practical relevance. On algorithmic interpretability, he argued that what matters is technical interoperability between systems and comprehensibility to law enforcement personnel, rather than mutual understanding between humans and algorithms. He also critiqued the paper's treatment of the balance between rules and reasoning models in automated administration, calling for further scrutiny.

Zhang Jingjing commended the paper's concise language and clear structure but highlighted the insufficient explanation of "responsibility alignment," which, as a core concept, risks confusing readers. She urged clearer differentiation between the "human-in-the-loop" and "algorithmic colleague" models, emphasizing dialogue rather than confrontation between humans and algorithms. On accountability, she stressed the need to clarify how technical responsibility is attributed, particularly for administrative counterparts seeking redress. She recommended analyzing the challenges and problems of algorithmic governance before proposing institutional safeguards, so as to strengthen the paper's rigor.

Tian Aoni recognized the paper’s relevance to debates on algorithmic threats to human agency but suggested clarifying the foundations and objectives of new concepts, as well as their relationship to existing theories. She noted weak connections between some content and the core theme, advising explicit analysis of China’s administrative culture and the feasibility of the "algorithmic colleague" model. She also called for clarifying the model’s relationship with instrumentalism and improving punctuation and section headings. Overall, she deemed the paper coherent but incomplete.

III. Expert Commentary
Cao Shengmin provided feedback on the report and paper. He acknowledged the innovative exploration of the "algorithmic colleague" governance model but identified areas for refinement. The discussions on human-machine collaboration, accountability distribution, and algorithmic overreach demonstrated depth, particularly regarding responsibility alignment and administrative discretion. However, the "algorithmic colleague" concept required stronger integration with existing theories and practices to transcend the instrumentalist-subjectivity binary. He urged deeper analysis of algorithmic decision-making transparency, interpretability, and legal liability frameworks to enhance theoretical and practical value. He praised the paper’s language and substance but noted room for theoretical development, especially in comprehensive algorithmic governance research. He expressed optimism about the paper’s potential and anticipated further revisions.

Finally, Cao Lijun briefly responded to the panelists’ and commentator’s suggestions.

The "Reflections on Human Rights" Academic Forum was independently launched by the Student Union of the Institute for Human Rights at China University of Political Science and Law in September 2023 under institutional guidance. Its name, meaning "collective reflection and intellectual refinement," reflects its mission to foster a student-led, open, innovative, and inclusive academic platform addressing contemporary issues.