Business at the Digital Helm: IGF Panel Calls for Trust, Cooperation, and Action Toward WSIS Goals
- Olga Nesterova

IGF, United Nations Headquarters — December 17, 2025
At the Internet Governance Forum (IGF) session titled “Business at the Digital Helm: Steering Multistakeholder Action for WSIS Goals,” leaders from global tech companies, governance bodies, and civil society emphasized a consistent message: the digital world is fragmenting, trust is eroding, and only coordinated multistakeholder action can deliver a safe, inclusive digital future.
The discussion—co-organized by ICC Basis, the World Economic Forum, and the IGF Secretariat—looked back on the seismic technological changes of 2025 and outlined priorities as stakeholders approach the next phase of the WSIS agenda.
A Shared Starting Point: Trust Has Eroded—And Must Be Rebuilt
The first panelist, Maria Fernanda Garza, stressed that businesses, governments, and civil society must urgently rebuild trust and deepen collaboration to shape meaningful digital governance.
Panelists agreed that while each stakeholder group has different goals, no progress is possible unless each side understands the others’ incentives and constraints. Governance, safety, and opportunity are now inseparable concepts—and all three require clarity of purpose.
“What we are experiencing now is an erosion of trust that must be earned back—by governments and businesses alike,” one panelist noted. “Everyone’s objectives differ. Understanding those objectives is essential to moving forward.”
Microsoft: AI Moved From Concept to Social Tool—Now Comes the Hard Part
The Microsoft representative reflected on how rapidly AI evolved from a theoretical capability into a tool used across the world.
AI adoption has accelerated especially in the Global South, though unevenly: the panelist cited a roughly 2:1 ratio of AI use in developed versus developing countries.
The panelist highlighted the importance of infrastructure:
“Broadband access and connectivity are the backbone of equitable AI deployment.”
Microsoft announced it had exceeded its goal of connecting 250 million people with AI-driven tools by the end of 2025, but cautioned that the next phase must avoid abstract discussions:
“We should stop thinking about AI in the abstract. The question is: what problems can it solve?”
Examples included applications in rural Africa, where AI is supporting agriculture, healthcare, and climate adaptation.
Protecting Children Online: Russia’s Alliance Raises Concerns and Tools
Nina Fedorova, representing Russia’s Alliance for the Protection of Children in the Digital Space, described the dual reality for children: unprecedented opportunity paired with unprecedented risk.
Her organization created a Child Digital Safety Index assessing risk across educational, entertainment, and other activity categories. Over the past year, they evaluated 100 tools and developed a methodology to identify priority areas for intervention. A new study across Russian regions will launch in 2026, with findings expected to guide local authorities.
Fedorova stressed that business must be involved more deeply in child protection efforts.
She detailed Russia’s hash-based illegal content detection system, which automatically shares indicators with companies and integrates with the Ministry of Interior—an example of tightly coordinated national-level enforcement.
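The article does not specify how this hash-sharing system works internally. As a rough illustration of the general technique—matching uploaded content against a shared set of hash indicators—here is a minimal sketch using plain SHA-256 exact matching; real systems of this kind typically use perceptual hashing, and the blocklist contents here are purely hypothetical.

```python
import hashlib

# Hypothetical set of hash indicators shared with participating platforms.
# In a real deployment these would be distributed by a central authority.
BLOCKLIST = {
    hashlib.sha256(b"known-harmful-sample").hexdigest(),
}

def matches_blocklist(payload: bytes) -> bool:
    """Return True if the payload's hash appears in the shared indicator set."""
    return hashlib.sha256(payload).hexdigest() in BLOCKLIST

# A platform receiving an upload would check it before distribution:
print(matches_blocklist(b"known-harmful-sample"))  # True
print(matches_blocklist(b"some benign upload"))    # False
```

Exact-match hashing only catches byte-identical copies; production systems such as perceptual-hash matchers also flag re-encoded or slightly altered variants, which is a significantly harder problem.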
She referenced a state-endorsed campaign, including a theatrical production portraying online risks faced by Russian teenagers, as part of Russia’s prevention efforts.
FACT-CHECK: Russia’s Narrative on Child Safety vs. Reality
While Russia’s delegation presented national child-protection efforts, it is important to contextualize them:
- Russia has designated major global platforms—including Facebook, Instagram, and others—as “extremist” or prohibited, severely restricting access.
- The Russian state maintains near-total control over the domestic digital ecosystem, including surveillance tools and information flows impacting both adults and children.
- Online “safety campaigns” frequently coincide with government propaganda narratives and are used to justify expanded censorship.
Any portrayal of Russia’s system as a model must therefore be considered within this broader reality of state-controlled digital space, limited freedoms, and constrained civil society.
Google: AI Is Transforming Society—and Humanitarian Work
The Google representative said they remain optimistic about AI’s potential:
“We see AI as a force to improve lives—across science, medicine, and society.”
Google highlighted recent achievements:
- Nobel-recognized research using open-source AI models to map proteins and accelerate drug discovery
- The potential for AI to transform national GDPs, especially in developing economies
- Opportunities for AI to support humanitarian crises—if access and safety are aligned
However, Google emphasized a critical sequence:
- People must be online
- People must be skilled and safe online
- Only then can they leverage AI for social and economic benefit
The company reiterated that opportunity must be balanced with safety and “smart regulation.”
Cybersecurity Expert: Digital Fragmentation Is the New Global Risk
Another speaker, from the cybersecurity community, delivered one of the starkest assessments:
- AI innovation is outpacing governance
- Trust is rapidly deteriorating
- Digital fragmentation—splintered systems, geopolitical blocs, incompatible standards—is becoming a critical global threat
He pointed out:
- U.S. technologies are restricted in China
- Chinese products are restricted in the U.S.
- Russia has built a sovereign internet
- India is building its own semi-closed infrastructure
“Closed systems increase risk. AI without cooperation becomes a risk multiplier.”
The expert warned that sophisticated cyber-attacks are increasing daily, while institutional capacity to govern these technologies lags far behind.
He urged the international community to focus less on dominance and more on cooperation, noting similar calls made yesterday at the UN General Assembly for shared standards.
IGF’s Role: A Crucial Equalizer for Internet Governance
Panelists emphasized that the Internet Governance Forum remains one of the few spaces where academia, governments, and the private sector engage on equal footing.
They reiterated that IGF was created by academics and must continue to involve them as the world shapes multistakeholder internet governance models. The group referenced multiple resolutions adopted in 2025 by the UN, G7, and other bodies, praising IGF for its role in connecting policy, research, and industry.
Looking Ahead: Nurturing the Next Digital Revolution
A central question emerged:
How do we nurture the internet revolution so that future generations benefit—and are protected—from its risks?
Panelists proposed several priorities:
- Creating economic models that incentivize safe innovation
- Developing ecosystems that support positive use of AI and emerging technologies
- Strengthening public-private partnerships
- Embedding security and trust frameworks across all digital processes
- Understanding that by 2030, 90% of code may be AI-generated, fundamentally changing the workforce
- Preparing for a world where AI replaces significant segments of engineering and operational labor
The collective message was clear: the digital future will not be governed by any single actor. It will require shared responsibility, meaningful dialogue, and sustained investment in cooperation.