Issues

What is a Human Internet?


Our vision for an online community is modeled on the principle that defines democracies: one human, one voice. One user, one equal voice online. We believe in the potential for an internet that serves society, one centered on human rights and free from the influence of bots and fake accounts. This is a human internet. Restoring human voices with real privacy is the primary mission of the Foundation for a Human Internet. We believe individuals should have control over their data and a right to safeguard it from abuse and surveillance. By making the internet a more democratic space, we can help it reach its full potential for communication, the sharing of information, and healthy debate.

There is a dire need to act now for a human internet. Fake news, surveillance states, and the suppression of free speech continue to jeopardize democratic institutions and human lives. Big tech monopolization concentrates power away from individuals and amplifies polarizing misinformation. When fake accounts, coordinated bot networks, and misinformation dominate, a human internet is unattainable. We must ensure that humans, not malignant bots, wealthy elites, governments, or corporations, are in charge. Together we can create an internet that works for us.

Global Erosion of Democratic Values

  • Disinformation and propaganda on a massive scale pose an existential threat to democracies worldwide.
  • The Foundation for a Human Internet’s mission is to counter this global threat.
  • Access to truthful, reliable information is essential for democracy, and the internet has the potential to improve such access.
  • However, if the current trajectory continues, democracies will remain vulnerable to authoritarians wielding the internet as their weapon.
  • Manipulating social media to undermine democracy is a real threat, and anyone with power and access to software engineers can do it.
  • Tek Fog is an example of a secretive app reportedly used to artificially inflate the popularity of the ruling party in India, and such practices are not isolated.
  • Coordinated Inauthentic Behavior (CIB) can give disproportionate influence to a few voices, which undermines fundamental democratic principles.
  • Democracy works when voices compete on a level playing field; CIB tilts that field.

Digital Authoritarianism

  • Digital authoritarianism describes the tactics deployed by dictatorships and autocracies to control the flow of information.
  • Authoritarian regimes use sophisticated bot network campaigns to further their agendas and drown out dissenting voices under the guise of “national security.”
  • Technological advancement has given state regimes tools to efficiently surveil their citizens and create state-favored bias in public discourse. For example, some autocratic leaders have committed genocide to consolidate power while convincing their unwitting citizens it was done to fight terrorism.
  • Anonymous social dialogue platforms are needed to combat digital authoritarianism and allow people to express disagreement outside the scope of state surveillance.
  • Failure to provide such platforms could lead to ever-expanding state control and ever-shrinking individual liberty.
  • Places such as Myanmar, Hong Kong, Saudi Arabia, North Korea, and Russia provide foreboding examples of our potential online futures.
  • The more digital power autocratic states develop, the better equipped they become to exert their influence internationally.
  • The Foundation for a Human Internet prioritizes and protects users’ privacy to help prevent suppression of dissent in online forums and keep these online spaces safe for organization and discussion.

Dangers of Disinformation

  • The integrity of information has been damaged by computational propaganda and by automated accounts that manufacture credibility; this has become a global crisis.
  • The lack of accountability online, combined with how quickly duplicate accounts and fake news sites can be created, has made social media a toxic environment in which people struggle to agree on basic facts or trust online content.
  • Disinformation not only divides society but also threatens journalism by making it harder for honest journalists to compete with scandalous lies that generate more traffic.
  • Fact checking adds a financial burden to news sites, making it difficult for smaller media companies to sustain their truth-seeking efforts.
  • The rise of AI makes it harder to identify fake accounts and fake news; everyone is susceptible to disinformation, so we must all be vigilant.
  • Current content moderation approaches are increasingly inadequate; we need more systematic and scalable solutions that treat disinformation as the international crisis it is.
  • Creating a modern human internet is a core solution to these problems, rather than trying to solve problems created by the current internet as they arise.

Content Moderation is Not the Solution

  • The Foundation for a Human Internet does not condone racism, intolerance, hate speech, science denial, or the spreading of misinformation.
  • Instead of censorship, the Foundation promotes a two-part solution:
    1. Achieving a human internet in which every person has one voice
    2. Establishing a merit-based infrastructure for online social dialogue in which the opinions and arguments of humans are vigorously debated, disputed, and judged by other real people
  • New, bot-resistant social networks can create accountable communities managed by algorithms that prioritize human well-being rather than maximize polarization.
  • Censorship and content moderation are temporary measures that are ineffective long-term against bot networks and disinformation.
  • Tech giants act as censors under the guise of “moderating content,” while governments and bad actors manipulate media and innovate new propaganda tactics.
  • Facebook has demonstrated that it is unable to successfully censor the flow of information.
  • Content moderation policies are designed to give platforms broad discretion, which can translate into inequitable enforcement.
  • Removing fake accounts often runs against a platform’s financial interests, since advertising revenue scales with the reported number of users.
  • Studies report shockingly high numbers of fake users on Facebook, with estimates that 30-50% of accounts are fake.

Privacy is a Human Right

  • In the real world, people have multiple identities and express themselves differently in different communities, where they can manage who is listening.
  • Blurring the lines between communities can be a sign of an autocratic government, as in the Soviet Union, where neighbors were incentivized to report anti-government sentiment.
  • The Foundation for a Human Internet envisions an online future that respects privacy as a core value and follows privacy-by-design principles.
  • Privacy is paramount to a functioning democratic society: it lets us control who knows what about us and vary our behavior with different people in order to maintain and control our social relationships.
  • Privacy and free speech are two sides of the same coin; without assurance of privacy, people may fear social, professional, or life-threatening consequences for expressing their opinions.
  • A non-private platform can never hope for the same level of honesty in public discourse as one that enshrines the privacy of its users as a core value.
  • We must carefully consider whom we entrust with the responsibility of ensuring privacy on the internet, as governments may not truly prioritize privacy and for-profit companies may put shareholder value above privacy protection.

Accountability with Anonymity

  • The Foundation for a Human Internet believes that anonymity and accountability can coexist online.
  • By-site anonymity allows people to have different identities depending on the situation, promoting privacy and enabling honest conversations on sensitive topics (see the sketch after this list).
  • The right to remain anonymous is a fundamental component of free speech and has been protected by the U.S. Supreme Court in cases such as McIntyre v. Ohio Elections Commission.
  • Anonymous platforms democratize discourse, create space for marginalized voices, and promote inclusion.
  • Online problems often attributed to anonymity, such as bullying, are in fact the result of a lack of accountability.
  • Communities can set and enforce their own rules without tracking identities across platforms.
  • Reducing anonymity is not the way to increase accountability.
  • Protecting online privacy requires guaranteeing anonymity and rejecting the government and corporate interests lobbying for unlimited data collection.
  • Anonymous democratic participation, such as voting and political polling, gains meaning from the number of active participants, not from the identities of those participating.
  • Protests are another important anonymous way for individuals to express their beliefs, although digital surveillance increasingly threatens the anonymity of even disguised dissenters.
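
How by-site anonymity can coexist with accountability is easiest to see in code. The sketch below is a minimal illustration resting on assumptions of our own: the user-held master secret and the choice of HMAC-SHA-256 as the derivation function are ours, not the Foundation’s published design. Each community sees a stable pseudonym it can moderate or ban, while pseudonyms from different communities cannot be linked to one another or to the person behind them.

    import hmac
    import hashlib

    def community_pseudonym(master_secret: bytes, community_id: str) -> str:
        """Derive a stable, per-community pseudonym from a user-held secret.

        The same user always gets the same pseudonym inside one community,
        so that community can enforce its rules against the pseudonym, but
        pseudonyms across communities are unlinkable because HMAC-SHA-256
        is a one-way, keyed function.
        """
        digest = hmac.new(master_secret, community_id.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()[:16]  # short, human-readable handle

    # Example: one user, two communities, two stable but unlinkable identities.
    secret = b"user-held master secret, never shared with any site"
    print(community_pseudonym(secret, "mental-health-forum"))
    print(community_pseudonym(secret, "local-politics-forum"))

A banned pseudonym stays banned, since it is deterministic within that community, yet no community learns who the person is anywhere else.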

The True Costs of Data Breaches: How Secure is Your Data?

  • Data leaks can happen to anyone, even the CIA, posing a serious risk to national security.
  • The only way for a company or organization to fully protect data is not to collect personally identifiable information (PII) in the first place.
  • Data breaches can cost businesses millions of dollars. In 2018, the National Cyber Security Alliance reported that “60 percent of small and midsize businesses that are hacked go out of business within six months.” In 2019, the estimated “Cost of a Data Breach” rose to an average of $8.19 million for US companies, and in 2021 data leaks cost enterprises an average of $4.2 million. Small businesses are increasingly targeted, often to the point of bankruptcy, and breaches erode consumer trust and reputation.
  • Data leaks can have serious consequences for communities discussing sensitive topics such as mental health, political activism, sexual abuse, systemic racism, gender, or sexuality. Anonymity can provide a safe space for marginalized communities to share information.
  • Governments can compel companies to share data, as seen in high-profile court battles like Carpenter v. United States, in which the Supreme Court held that the government needs a warrant to obtain cell site location information from a cell phone company. Other disputes, such as those between the FBI and Apple over locked iPhones, have shown that user privacy does not always win.
  • The simplest way to prevent both data theft and compelled disclosure to governments is not to collect PII in the first place.

A Nonprofit Solution: Trusted Identity Layer

  • The Foundation for a Human Internet is a nonprofit because for-profit corporations have an incentive to exploit data for profit.
  • Big tech companies consolidate and generate record profits while limiting privacy options and leaving individuals vulnerable.
  • Facebook and TikTok hand over user information to governments, and Zoom changed its definition of encryption to serve its bottom line.
  • The data brokerage market is projected to reach $345 billion by 2026; over four recent quarters, Facebook earned $112.39 billion and Alphabet Inc. (Google) earned over $53 billion in revenue.
  • For-profit companies act on behalf of shareholder interests, not citizens.
  • Nonprofits have no fiduciary responsibility to shareholders and can prioritize user anonymity and security.
  • The Foundation for a Human Internet values transparency and aims to be a trusted identity layer between users and organizations.
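
To make the idea of a trusted identity layer concrete, here is a minimal sketch resting on assumptions of our own: the use of a phone number as proof of personhood, and the class and method names below, are hypothetical rather than the Foundation’s published architecture. The layer checks that a real, unique human is behind each signup, keeps only a keyed one-way hash rather than the phone number itself, and gives every partner application its own identifier, so no PII sits in a database waiting to be breached or subpoenaed.

    import hashlib
    import hmac
    from typing import Optional

    class IdentityLayer:
        """Hypothetical nonprofit identity layer: enforces one human, one
        account per application without retaining personally identifiable
        information (PII)."""

        def __init__(self, server_secret: bytes):
            self._server_secret = server_secret
            self._registrations = set()  # (person_hash, app_id) pairs; no raw PII

        def register(self, phone_number: str, app_id: str) -> Optional[str]:
            """Return an app-specific anonymous ID, or None if this person has
            already registered for that application."""
            # Keyed one-way hash of the phone number; the number itself is discarded.
            person_hash = hmac.new(self._server_secret,
                                   phone_number.encode("utf-8"),
                                   hashlib.sha256).hexdigest()
            key = (person_hash, app_id)
            if key in self._registrations:
                return None  # same human, same app: no duplicate accounts
            self._registrations.add(key)
            # Each app gets a different identifier, so records held by two apps
            # cannot be cross-linked, and neither can be traced to the number.
            return hashlib.sha256(f"{person_hash}:{app_id}".encode()).hexdigest()[:16]

Because only hashes are stored, a breach of the layer or a government demand yields no phone numbers, and two applications comparing their records cannot discover that they share a user.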