Companies have significant influence over public discourse on online platforms, and the algorithms that shape these platforms should therefore be regulated and constrained to sufficiently consider the public interest (Susskind, 2018: 350). Perhaps the easiest way of returning control of a public good to the people would be the nationalisation of large AI companies and platforms. However, this also affords the government considerable power to tailor public discourse to its interests (Susskind, 2018: 350). As such, nationalisation may lead to the aforementioned “digital gerrymandering” (Zittrain, 2014).
In light of these issues, some have suggested that companies be treated like public utilities and regulated in such a way as to protect the public interest (Cohen, 2016: 378–382; Rahman, 2018). Relatedly, many have argued in favour of ‘net neutrality’: the principle that companies cannot, for example, throttle internet connectivity or limit access to content that runs against their interests (Lanier, 2014). There are, however, some important differences between public utilities and technology companies. Utility companies are regulated because they control a public good (electricity, etc.), and whilst technology companies do this too, they uniquely hold political power over who sees what. As such, comparing them to utilities seems insufficient (Susskind, 2018: 157). Moreover, companies can be just as unjust and corrupt as governments, perhaps even more so; why, then, should we trust them over the government to run these utilities (Pasquale, 2017)?
Furthermore, since political power is associated with the ownership of online forums (and the algorithms that govern them), we might want to restrict how much of a forum (or how many forums) can be owned by one individual or company. In essence, by doing this we accept that some platforms will advantage certain ideologies, but by preventing any one company from monopolising them, we could adequately protect deliberation of all sorts (Susskind, 2018: 358). This form of structural regulation, however, may actually further polarise political discourse online, with each platform becoming its own echo chamber.
If traditional regulation cannot provide all the answers, then transparent algorithms may be a useful element in a broader regulatory package. If all data usage and algorithms were clearly displayed in a manner understandable to the layman, we could place greater trust in them to govern our public platforms (Granka, 2010: 365; Manheim and Kaplan, 2019: 170; Susskind, 2018: 354). Tech companies, of course, have private interests and may not be keen to display their information to competitors. X (formerly Twitter) has nevertheless done this to a certain extent by open-sourcing its recommendation algorithm (Twitter, 2023a; Twitter, 2023b). The move has been criticised, however, for mistakes, incomplete code, and underlying code that is missing entirely (Vaughan-Nichols, 2023). Open-source models will be further discussed in section 3.2.3.
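To illustrate what such transparency might look like in practice, consider a minimal sketch of an openly documented ranking function (hypothetical, and not drawn from X’s actual code): every weight that determines a post’s visibility is published in one place, so that a contested ranking could in principle be audited line by line.

    # Hypothetical sketch of a "transparent" feed-ranking function.
    # Every weight that shapes a post's visibility is declared openly,
    # so regulators and users can see exactly how scores are computed.
    # (Illustrative only; not based on X's open-sourced code.)
    from dataclasses import dataclass

    @dataclass
    class Post:
        likes: int
        replies: int
        reposts: int
        author_followed: bool  # does the viewer follow the author?

    # The platform's entire "editorial policy", published in one place.
    WEIGHTS = {
        "likes": 1.0,
        "replies": 2.0,      # replies weighted above likes to reward discussion
        "reposts": 1.5,
        "followed_bonus": 10.0,
    }

    def visibility_score(post: Post) -> float:
        """Return a post's ranking score using only the published weights."""
        score = (WEIGHTS["likes"] * post.likes
                 + WEIGHTS["replies"] * post.replies
                 + WEIGHTS["reposts"] * post.reposts)
        if post.author_followed:
            score += WEIGHTS["followed_bonus"]
        return score

    # A disputed ranking can then be audited step by step:
    print(visibility_score(Post(likes=12, replies=3, reposts=2, author_followed=True)))  # 31.0

Even a toy example like this makes the point: the platform’s ‘editorial policy’ is reduced to a handful of inspectable numbers, which is precisely the kind of disclosure that transparency advocates have in mind.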
One final notion to explore is the idea that technology itself can be democratised to mitigate some of these risks. Whilst it is possible to further limit the power of private firms to control deliberation, we could also incorporate democracy itself into technology (Susskind, 2018: 359). For example, we can imagine free platforms where anyone can modify the code, or methods of scrutiny that could be applied to the creators of public forums before they are made available to the public.