Issues

What is a Human Internet?

  • Our vision for an online community is modeled on the principle that defines democracies: one human, one voice. Online, that means one user, one equal voice. We believe in the potential for an internet that serves society, centered on human rights and free from the influence of bots and fake accounts. This is a human internet. Restoring human voices with real privacy is the primary mission of the Foundation for a Human Internet. We believe individuals should have control of their data and a right to safeguard it from abuse and surveillance. By making the internet a more democratic space, we can help it reach its full potential for communication, information sharing, and healthy debate.

  • There is a dire need to act now for a human internet. Fake news, surveillance states, and the oppression of free speech continue to jeopardize democratic institutions and human lives. Big tech monopolization limits human power and enables polarizing misinformation. When fake accounts, coordinated bot networks, and misinformation dominate, a human internet is unattainable. We must ensure that humans, not malignant bots, wealthy elites, governments, or corporations, are in charge. Together we can create an internet that works for us.

Global Erosion of Democratic Values

  • We live in an internet age that enables disinformation and propaganda on a massive scale. Governments and corporations have unprecedented surveillance capabilities—the extent of which is still not fully understood. These realities pose an existential threat to democracies worldwide. The Foundation for a Human Internet targets this global threat as its primary mission. With the right foundation and infrastructure, the internet could improve access to truthful, reliable information that is essential for democracy. However, if the current trajectory continues, democracies will remain vulnerable to authoritarians wielding the internet as their weapon.

  • Anyone with power or wealth and access to a small team of software engineers can manipulate social media to undermine democracy by purchasing and automating thousands of fake accounts that spread false information, effectively polarizing and misleading real voters. For example, in India, a secretive app known as Tek Fog was created to defeat reCAPTCHA systems. Tek Fog was “used by political operatives affiliated with the ruling party to artificially inflate the popularity of the party, harass its critics and manipulate public perceptions at scale across major social media platforms.” Tek Fog is not an isolated case; its operators’ secretive practices mirror those of many shadowy groups. Through these activities, such groups are dismantling a core principle of democracy: every person has a voice regardless of their social status, wealth, or power. A voter can only vote once at the ballot box. But online, one bad actor can manage millions of fake accounts.

  • This is a calculated suppression of free speech, an autocratic construct that directly challenges equality and attacks sovereignty in democratic nations. Social norms, public sentiment, and human values are being controlled by people with power. Through coordinated inauthentic behavior (CIB), a few voices can exert disproportionate influence over discourse. Democracy works when voices exist on a level playing field. When governments, partisan actors, and radical individuals can artificially amplify their voices through bot networks, they undermine fundamental democratic principles.

Digital Authoritarianism

  • Authoritarian regimes have the ability and resources to deploy sophisticated bot network campaigns to further their agendas, consolidate power, and drown out dissenting voices under the guise of “national security.” Digital authoritarianism describes the various tactics dictatorships and autocracies use to control the flow of information. Technological advancement has given state regimes tools to efficiently and effectively surveil their citizens and create state-favored bias in public discourse. Real opinions are muted while opinions from bots are amplified. For example, some autocratic leaders commit genocide to consolidate power while convincing their unwitting citizens it was done to fight terrorism. Those standing up for human rights are subjected to imprisonment, torture, and execution; even their families often face retribution.
  • We need platforms for anonymous social dialogue to combat digital authoritarianism. If people cannot connect to express disagreement outside the scope of state surveillance, we will continue to see “ever-expanding state control and ever-shrinking individual liberty.” Places such as Myanmar, Hong Kong, Saudi Arabia, North Korea, and Russia may foreshadow our online futures. Their practices should concern everyone, not just citizens of those states. The more digital power autocratic states develop, the better equipped they become to exert their influence internationally.
  • The Foundation for a Human Internet strives to do far more than dissolve the bot networks utilized by these oppressive regimes. We prioritize and protect users’ privacy so they can feel safe standing up to human rights abuses. People should be able to organize and discuss these issues outside of state control measures. By keeping online and offline identities distinct, we can help prevent the suppression of dissent in online forums, and keep these online spaces safe for organization and discussion.

Dangers of Disinformation

  • The internet has damaged the integrity of information. Computational propaganda and the use of automated accounts to manufacture credibility are erupting into a global crisis. The lack of accountability online is fanning the flames and creating a toxic environment on social media platforms. Currently, there is virtually no way to prevent duplicate accounts. Trolls on message boards can return with new usernames as quickly as people catch on to their trolling. Those operating fake news sites can host a new website as fast as people catch on to their disinformation. Automated account networks can easily amplify selected messages on Twitter and Facebook through sheer force. As a result, people cannot agree on basic facts or on which content to trust. Widening polarization, populations divided over different sets of facts, and an inability to find middle ground make solving societal problems nearly impossible. Help us [link to our donate page on ‘help us’] create a human internet that is productive for solving global issues and provides a place for global citizens to debate freely.
  • The prevalence of disinformation has consequences beyond creating division in society; it also threatens journalism. Fake and misleading news sites often beat out honest journalists, because a scandalous lie generates more traffic than a boring truth. Fact checking adds a financial burden onto news sites and is often a luxury. Large newspapers like The New York Times or The Washington Post can shrug off the costs, while smaller media companies are forced to reduce their number of media channels, journalists, and truth seekers. Good journalism can feel unrewarding while quick, sensational headlines generate clicks, even when they cannot be credibly verified. Meanwhile, the comment sections on news articles, which should be rich sources of debate and discussion, are overrun by trolls and spam. FHI seeks to reduce these trolls and spam while promoting quality journalism that derives information from verifiable sources.
  • As artificial intelligence becomes smarter, it is becoming progressively harder to identify fake accounts and fake news. This means we cannot simply urge people to do their research or be careful about what they share. There is often an implication that only uninformed people fall for ‘fake news’. This causes people to believe they are too intelligent to be misled, which, ironically, makes them more susceptible to deception. Anyone can be affected by disinformation, and we must all be vigilant. We also all develop cognitive biases that are very difficult to undo. By being aware of the danger and our shared flaws, we can begin to curb misinformation.

  • Current content moderation approaches will become increasingly inadequate in the future, which is why we need more systematic and scalable solutions. With international connection come international problems; disinformation must be seen as an international crisis and challenged through unified international solutions. Rather than trying to solve problems created by the current internet as they arise, let us change them at the core by creating a modern human internet.

Content Moderation is Not the Solution

  • The Foundation for a Human Internet does not condone racism, intolerance, hate speech, science denial, or the spreading of misinformation. Nor do we believe that human voices should be censored. Instead of censorship, we promote a different, two-part solution. We must first achieve a human internet in which every person has one voice. Only then can we establish the second step: a merit-based infrastructure for online social dialogue, where the opinions and arguments of humans are vigorously debated, disputed, and judged by other real people. New, bot-resistant social networks will be able to create such accountable communities, managed by algorithms that mimic the real world and prioritize human well-being rather than maximize polarization. This approach affords everyone equal rights to free speech and gives ideologies and belief systems prominence comparable to what they hold in the real world. When we know which ideologies harmful to society are gaining prominence, and what the people who hold them are saying, we are better equipped to stand up against them in both online forums and real life.
  • Currently, tech giants are active censors under the guise of “moderating content,” while governments and bad actors manipulate media and innovate propaganda tactics. The line between fact and fiction is further blurred while discussions grow vitriolic and unproductive. Calls for Facebook and other social media companies to censor accounts are a symptom of our broken internet. Censorship and content moderation often mean the same thing, and both are temporary solutions that are ineffective long-term for fighting bot networks and disinformation. As for-profit tech companies censor more content on their sites, they seize the veiled opportunity to exploit power and threaten free speech. Authoritarian governments also see an opportunity, using the examples set elsewhere to justify removing opinions that they deem inconvenient or unfavorable.
  • Censorship is the most common form of content moderation, but fact-checking and flagging harmful misinformation by the networks themselves, rather than by independent, trustworthy actors, are also problematic. These solutions frequently feel like the obvious and morally righteous options in the short term, but they ignore long-term problems. No matter the type, moderation is almost never a good solution; all methods present difficulties in implementation and restrictions on speech. Moderation is expensive, creates distrust in online communities, and gives tech companies, which ultimately chase profits, a disproportionate amount of power. In the arms race to circumvent content moderators, AI-driven bots will increasingly be a step ahead, finding the gray area of “just barely legal.” Facebook, for instance, has demonstrated that it is unable to successfully censor the flow of information. Furthermore, content moderation policies are designed to give the platforms broad discretion, which can translate into inequitable enforcement practices. Removing fake accounts often goes against the financial interests of a company that generates advertising revenue based on its number of users. Unsurprisingly, some studies report shocking numbers of fake users, with estimates that 30-50% of Facebook users are fake. Deleting inauthentic users would cause revenue to plummet, so the only incentive to deal with this problem is to ameliorate bad press and legal disputes.

Privacy is a Human Right

  • In the real world, we have a multiplicity of identities that we express in different communities. People speak their minds because they can know and manage exactly who is listening. The government cannot listen in and imprison you. Your employer cannot listen in and fire you. Your entire community cannot indiscriminately listen in and cancel you. Conversations at work have different rules and expectations than conversations between friends.

  • Within these communities we are held accountable, but rarely between them. The blurring of these lines in the real world is often a sign of an autocratic government. In the Soviet Union, for example, neighbors and acquaintances were often incentivized to report anti-government sentiments. This created paranoia and distrust; even the threat of losing privacy in casual conversations led to self-censorship and fear. The Foundation for a Human Internet envisions an online future that abides by the real-world democratic understanding of privacy. We follow privacy by design, the principle of embedding privacy into any product or app from concept through development. We all have different identities in different contexts, and our right to privacy is often our right to maintain those different identities. Privacy is paramount to a functioning democratic society.
  • The United Nations Office of the High Commissioner for Human Rights defines privacy as something that “accords us the ability to control who knows what about us and who has access to us, and thereby allows us to vary our behavior with different people so that we may maintain and control our various social relationships.” This definition offers insight into why Privacy International called privacy and free speech “two sides of the same coin.” Without assurance that their personal identity will remain private on the internet, people might reasonably feel uncomfortable stating their opinions, fearing social, professional, or life-threatening consequences. For this reason, a non-private platform can never hope for the same level of honesty in public discourse as one that enshrines user privacy as a core value.
  • If we cherish the nuances of privacy that we experience in real life, and if we wish to build an internet where these nuances can thrive, we must think very carefully about whom we entrust with this responsibility. We cannot truly trust a government when it says it is giving us privacy, and for-profit companies will always put shareholder value above privacy protection.

Accountability with Anonymity

  • Online, accountability might seem at odds with privacy and anonymity. But we believe anonymity and accountability should coexist: users freely express their opinions without fear of retaliation, while platforms have the tools to uphold their community rules. The balance of the two will foster a healthier internet. At a cafe, for instance, patrons generally behave, and anyone not abiding by the cafe’s rules will be asked to leave; they cannot return with a different identity to continue their mischief. We are working to bring this simple yet effective real-world accountability to the internet.
  • We believe in by-site anonymity that reflects the privacy you have in the real world: the ability to be someone different depending on the situation. For example, among friends, an individual may share details about their gender or sexual identity that they choose to keep secret from their family, employer, or government. In cases such as these, anonymity is not only convenient but vital. Similarly, by keeping identities separate online, we encourage privacy and promote online spaces where people can comfortably speak their minds about the most private and sensitive topics.
  • The right to remain anonymous is a fundamental and irreplaceable component of our right to free speech. “Anonymity is a shield from the tyranny of the majority,” wrote Justice Stevens of the U.S. Supreme Court in his opinion for McIntyre v. Ohio Elections Commission, in which the Court determined that the First Amendment protects an author’s decision to remain anonymous. This right to anonymity has come under attack in the online world because people erroneously conflate it with other online dangers such as cyberbullying, defamation, and hateful content. The benefits of anonymity are often overlooked: anonymous platforms democratize discourse and create space for honest conversation where users can express their opinions without fear of unfair consequences. Additionally, since anonymity empowers everyone equally to express themselves, it leads to the inclusion and support of otherwise marginalized voices. In this model, each online user has one voice, no more or less important than another.
  • Online problems often attributed to anonymity, such as bullying, are really the result of a lack of online accountability. With a proper toolkit, communities can set and enforce their own rules within their own bounds without recording and tracking identities across platforms. We can increase accountability by stopping inauthentic behavior online while simultaneously empowering platforms to uphold community rules. Reducing anonymity is not the solution: denying users the right to anonymity chips away at their fundamental right to free speech without achieving the desired accountability.

  • Guaranteeing anonymity online and rejecting the government and corporate interests lobbying for unlimited collection of our data is the only way to ensure that our online privacy is protected. Voting and political polling are examples of anonymous democratic participation that gain meaning from the number of active participants, not from the identities of those participating. Protests are also an important anonymous way for individuals to express their beliefs, although digital surveillance increasingly threatens the anonymity of disguised dissenters.
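The pairing of accountability with anonymity described above can be made concrete. A minimal sketch, assuming a user-held secret key and hypothetical site names (this is an illustration of the general technique, not FHI's actual implementation): deriving each site's pseudonym with a keyed hash makes the pseudonym stable within a site, so bans stick, while pseudonyms for different sites remain unlinkable.

```python
import hashlib
import hmac

def site_pseudonym(user_secret: bytes, site_id: str) -> str:
    """Derive a stable, site-specific pseudonym from a user-held secret.

    The same user always gets the same pseudonym on a given site, so
    community rules and bans can be enforced persistently (accountability),
    while pseudonyms for different sites share no visible relation and
    cannot be traced back to the real identity (anonymity).
    """
    return hmac.new(user_secret, site_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical user secret and site identifiers, for illustration only.
secret = b"example-user-secret"
forum_id = site_pseudonym(secret, "cafe-forum.example")
news_id = site_pseudonym(secret, "news-comments.example")

# Stable within a site: a banned user cannot return as someone new.
assert forum_id == site_pseudonym(secret, "cafe-forum.example")
# Unlinkable across sites: each community sees a different pseudonym.
assert forum_id != news_id
```

The design choice to keep the secret on the user's side mirrors the "no data to steal" principle discussed below: the site stores only the pseudonym, never the identity behind it.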

The True Costs of Data Breaches: How Secure is Your Data?

  • Data leaks can happen to anyone; no organization can guarantee it will keep your data safe. Even the CIA has been hacked, posing a serious risk to U.S. national security. The only way for a company or organization to fully protect data is to not collect personally identifiable information (name, email, phone number, address, etc.) in the first place. By not collecting data, one can completely mitigate the dangers associated with data breaches. The majority of consumer businesses that monetize through data do not do so by selling personally identifiable information; they sell other characteristics of user behavior or preferences on the platform. If you own a website, users may actually express those characteristics more, giving you more monetizable data, when no personal identifier is linked to their activity.
  • Data leaks can have serious consequences for communities that discuss sensitive topics such as mental health, political activism, sexual abuse, systemic racism, gender, or sexuality. Imagine the fear of an individual identifying as LGBTQ+ in a country where homosexuality is a crime. Unless there is space and infrastructure for anonymous discussion, where users can trust that their data is safe, people in marginalized communities cannot share or gain access to vital information. Websites and services connecting people within these communities will see less engagement if they cannot protect their users through guaranteed anonymity. This is not only about protecting vulnerable groups; it is about protecting all humans. The world benefits when the value of humans takes precedence over the price tag on our personal information.

  • In addition to data leaks and hacks, there is a growing fear that governments can compel companies to share data. In the U.S., this has led to high-profile court battles in which the courts have tried to strike a balance between national security and privacy. In the Supreme Court ruling Carpenter v. United States, the Court decided that the government needs a warrant to obtain cell site location information from a cell phone company. Yet other disputes, such as those between the FBI and Apple over iPhones, have demonstrated that user privacy does not always win. The simplest way to prevent both data theft and forced compliance with government demands goes back to having no data to steal or share in the first place.

A Nonprofit Solution: Trusted Identity Layer

  • The Foundation for a Human Internet is a nonprofit, because for-profit corporations have an incentive to exploit data for profit. The same practices that polarize society, spread disinformation, and exploit user data also generate enormous profit. As a result, we are left highly vulnerable, both as individuals and as a society, while big tech consolidates, generates record profits, and finds new ways to limit our privacy options. Anyone with a Facebook account knows that even when they customize their privacy settings, Facebook freely hands their information over to governments and corporations. TikTok follows a similar logic, resting comfortably in the pocket of the Chinese government, which gains access to increasingly invasive data the platform collects about people around the world. Zoom discreetly changed its definition of encryption so it could reap higher profits from personal information. Users are left with a difficult decision: give monopolistic tech companies their private information, or grapple with an inability to participate in social dialogue.
  • The data brokerage market, in which corporations collect and sell massive banks of personal information, is projected to be worth around US$345 billion in 2026. Across its four quarterly reports for 2021, Facebook reported US$112.39 billion in revenue. Alphabet Inc. (Google) earned over US$53 billion in just one quarter. Companies want to maintain the internet status quo because it is extraordinarily lucrative for them. Any deviations from the current norm are designed to increase profits, with little regard for users and their privacy. These practices are unsurprising given that for-profit companies are incentivized to act on behalf of shareholder interests, not citizens.
  • A nonprofit has no incentive to go back on its word; our primary mission is to protect the anonymity and security of internet users. As a nonprofit, we have no fiduciary responsibility to shareholders that could ever supersede our responsibility to our users, and we instead face increased transparency requirements. Everything we do is open source, because we value transparency and want to be held to the highest standards of accountability. Our goal is to be a trusted identity layer between the user or consumer and the organization or company.