---
tags: vtaiwan
---
# Shared Notes: Experiences Using AI in Digital Civic Participation
## Background
- On 6/3 we received an email from an FNF partner: People Powered, an organization with a long-standing focus on digital civic participation, is collecting feedback on applying AI to digital participation. This looks like a good opportunity to share lessons from our issue meetups. [name=peter]
- Relevant information follows:
:::info
Our partner People Powered (https://www.peoplepowered.org/) is collecting best practices of digital participation and AI applications that have resulted in clear benefits for governments, organizations, and communities in facilitating people's participation, particularly from Africa, Asia (for friends in Taiwan: this definitely includes Taiwan, where our hub is located), Eastern Europe, and Latin America. Selected cases will be part of the new digital participation guide that we are developing with People Powered. If you or your offices' partner organizations know of or have carried out any great practices, please feel free to submit them via the "Submit here" link by June 10.
:::
## How can I participate?
- You can type your suggestions directly below.
- If your suggestion involves a specific context, please describe that context in detail.
## Collected Suggestions
### Joshua Yang:
From vTaiwan's perspective, AI works most effectively when different tools handle distinct democratic challenges. Rather than seeking one universal solution, we match specific technologies to particular participatory problems based on what we've already implemented and what we envision for the future.
Our current approach focuses on how AI technologies work together across the democratic process. We currently deploy different tools at two main stages: first, mapping citizen opinions through Polis; then facilitating deeper discussion through LLM-monitored deliberation. Each stage addresses specific barriers to meaningful participation, whilst we envision expanding this toolkit to include better information-gathering tools.
**Understanding:** Polis currently handles broad participation through intelligent consensus-finding that actively encourages perspective-taking. Our existing system does far more than simply map opinions. Polis deliberately nudges participants towards greater understanding of alternative viewpoints by strategically presenting statements from people who hold different views. This creates a dynamic learning environment where people often discover common ground they didn't know existed, or develop more nuanced understanding of why others hold different positions. The approach scales to large-scale input: by 2020, vTaiwan's mailing list included 200,000 individuals (see *Lessons From Consensus Building in Taiwan*). The system reveals not just where people currently stand, but helps them explore different perspectives through respectful engagement.
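The mechanics behind this opinion mapping can be illustrated with a toy example. Polis represents each conversation as a participant-by-statement vote matrix and uses dimensionality reduction plus clustering to find opinion groups; the sketch below is a minimal stand-in for that pipeline (Polis's production algorithm differs in detail), projecting a small vote matrix onto its first principal component, splitting participants into two camps, and flagging statements both camps lean the same way on:

```python
import numpy as np

# Toy vote matrix: rows = participants, columns = statements.
# +1 = agree, -1 = disagree, 0 = pass/unseen. Data is illustrative.
votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1,  0,  1],
    [-1, -1,  1,  1,  1],
    [-1,  0,  1,  1,  1],
    [ 1,  1,  0, -1,  1],
])

# Centre the matrix and project onto the first principal component
# (Polis also reduces dimensionality before clustering; same core idea).
centred = votes - votes.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]

# Split participants into two opinion groups by the sign of PC1.
group_a = np.where(pc1 >= 0)[0]
group_b = np.where(pc1 < 0)[0]

# A rough "consensus" statement: both groups' mean votes point the
# same way. Statement 4 (everyone agreed) should surface here.
mean_a = votes[group_a].mean(axis=0)
mean_b = votes[group_b].mean(axis=0)
consensus = np.where(np.sign(mean_a) == np.sign(mean_b))[0]
print(sorted(group_a.tolist()), sorted(group_b.tolist()), consensus.tolist())
```

Even in this tiny example, the two camps (participants {0, 1, 4} vs {2, 3}) are recovered and the one statement everyone supports is surfaced as common ground, which is the signal Polis uses to steer participants toward shared positions.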
LLMs currently analyse these Polis clusters to understand the reasoning behind different opinion camps. Once Polis has mapped public opinion, we deploy large language models to examine what distinguishes different groups of participants. The LLMs process the specific statements that each cluster agreed or disagreed with, identifying the underlying values, concerns, or assumptions that drive different positions. This analysis reveals not just that people hold different views, but why they hold them—whether disagreements stem from different priorities, different factual understandings, or different experiences.
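The cluster-analysis step above amounts to handing each opinion group's voting record to a language model and asking *why* the group votes that way. A minimal sketch of that prompt construction is below; the statement texts, function name, and downstream model call are all hypothetical, not vTaiwan's production code:

```python
def cluster_analysis_prompt(cluster_name, agreed, disagreed):
    """Build an LLM prompt asking why a Polis opinion group votes as it does.

    Illustrative only: in a real pipeline the returned string would be sent
    to a language model; that call is deliberately left out here.
    """
    lines = [
        f"Opinion group '{cluster_name}' in a Polis conversation.",
        "Statements this group AGREED with:",
        *[f"- {s}" for s in agreed],
        "Statements this group DISAGREED with:",
        *[f"- {s}" for s in disagreed],
        "In 2-3 sentences, identify the underlying values, concerns,",
        "or assumptions that best explain this voting pattern.",
    ]
    return "\n".join(lines)

# Hypothetical statements from a ride-sharing deliberation.
prompt = cluster_analysis_prompt(
    "Group A",
    agreed=["Ride-sharing apps should be legal."],
    disagreed=["Only licensed taxi drivers may carry passengers."],
)
print(prompt)
```

Framing the task as "explain the voting pattern" rather than "summarise the statements" is what surfaces the underlying priorities, factual understandings, or experiences that the paragraph above describes.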
**Deliberating:** During face-to-face deliberation, LLMs currently monitor conversations to track how understanding evolves. Armed with insights about different opinion camps from the Polis analysis, LLM technology monitors live discussions to identify when participants begin to understand alternative viewpoints or when new consensus emerges around unexpected solutions. This creates a feedback loop from opinion mapping through analysis to deep deliberation.
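The monitoring loop described above can be sketched as labelling each transcript turn for signs of cross-camp understanding. In the sketch below a keyword heuristic stands in for the LLM classifier (in practice each turn would be labelled by a language model, not string matching), and the transcript itself is invented:

```python
# Cues a speaker is engaging with another camp's view (illustrative only;
# a real system would use an LLM classifier, not string matching).
AGREEMENT_CUES = ("i see your point", "that makes sense", "we could both")

def flag_consensus_moments(transcript):
    """transcript: list of (speaker_camp, utterance) tuples.
    Returns indices of turns where a speaker signals understanding
    of an alternative viewpoint."""
    flagged = []
    for i, (camp, text) in enumerate(transcript):
        lowered = text.lower()
        if any(cue in lowered for cue in AGREEMENT_CUES):
            flagged.append(i)
    return flagged

turns = [
    ("Group A", "Safety rules exist for a reason."),
    ("Group B", "I see your point about safety, but access matters too."),
    ("Group A", "That makes sense; maybe we could both accept a pilot scheme."),
]
print(flag_consensus_moments(turns))  # → [1, 2]
```

Feeding these flagged moments back to facilitators, together with the earlier cluster analysis, is what closes the feedback loop from opinion mapping to live deliberation.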
**Learning:** Future developments focus on realising our vision for enhanced information-gathering before participation begins. We're exploring AI agents that could autonomously research policy questions and interactive LLM-based documents that would allow citizens to explore issues through conversation-style interfaces. These envisioned tools would strengthen the preparation phase, ensuring participants arrive at both Polis voting and face-to-face discussions with robust understanding of technical details and broader context.
Human validation ensures democratic accountability, particularly around AI-generated summaries. The most critical oversight occurs when AI systems summarise both the Polis cluster analysis and subsequent face-to-face conversations. Participants vote on these summaries to verify they accurately capture what different opinion camps actually believe and what was genuinely discussed, ensuring AI cannot misrepresent human voices.
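One way to make that accountability check concrete: require every opinion camp, not just the overall majority, to approve an AI-generated summary, so a summary that misrepresents one camp's views fails even if the other camps accept it. The per-camp threshold below is an illustrative assumption, not vTaiwan's actual rule:

```python
def summary_validated(votes_by_camp, threshold=0.6):
    """Accept an AI-generated summary only if EVERY opinion camp approves.

    votes_by_camp: {camp_name: [bool approval votes]}.
    The 60% per-camp threshold is an assumption for illustration.
    """
    for camp, votes in votes_by_camp.items():
        if not votes or sum(votes) / len(votes) < threshold:
            return False  # one camp rejecting vetoes the summary
    return True

# Both camps clear the threshold -> summary stands.
accepted = summary_validated({
    "Group A": [True, True, False, True],  # 0.75 approval
    "Group B": [True, True, True],         # 1.0 approval
})

# Group B rejects its portrayal -> summary is sent back for revision.
rejected = summary_validated({
    "Group A": [True, True, True],
    "Group B": [False, False, True],       # 0.33 approval
})
print(accepted, rejected)  # → True False
```

Requiring unanimity across camps rather than a simple majority is what prevents a large camp from signing off on a summary that erases a smaller camp's voice.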