May 5, 2023 · edited May 5, 2023

Eric Schmidt! I don't think so. Google has played along with this schtick from the beginning, and Eric has been in the driver's seat since 2001, when he became CEO. He has played along with, and capitalized on, Section 230 since Google's early days.

https://en.wikipedia.org/wiki/Section_230

Section 230 is the reason it is effectively impossible to hold social media companies legally liable for content posted by their users.

Also, Google, with Eric Schmidt at the helm, has made a pile of money off the Android operating system, which powers Samsung phones, among many others.

Likely, a primary reason Eric is so concerned about AI is that open-source AI poses an existential threat to Google Search (his cash-cow monopoly).

Eric never had a moral compass and I doubt that he has suddenly developed one.


Thank you for taking on this formidable challenge, Jonathan. I have read all of your books and frequently recommend them.

I understand AI poses a serious threat to the information landscape, but we must vigilantly guard against authoritarian tendencies in our efforts to thwart those threats, lest we inadvertently empower the state and other authorities to infringe on our inalienable human rights—much as chemotherapy indiscriminately destroys healthy cells along with malignant ones.

Over the past three years, we have witnessed how governments have used the excuse of suppressing “misinformation” to silence dissident voices exposing their disinformation and lies—to lethal effect—as I’ve covered extensively at my Substack:

• “Letter to US Legislators: #DefundTheThoughtPolice” (https://margaretannaalice.substack.com/p/letter-to-us-legislators-defundthethoughtpolice)

• “Letter to the California Legislature” (https://margaretannaalice.substack.com/p/letter-to-the-california-legislature)

• “Dispatches from the New Normal Front: The Ministry of Truth’s War on ‘Misinformation’” (https://margaretannaalice.substack.com/p/dispatches-from-the-new-normal-front)

My concern with the proposed reforms you have outlined here is that they can easily be abused by totalitarian forces. #1, for example, would eliminate the protective cloak of privacy for whistleblowers and others attempting to expose corruption and other regime crimes, thus endangering the ability of individuals to share information that incriminates the very powers enforcing the rule.

#2 is an excellent idea and one I support; same goes for #5.

#3 is a bit amorphous—I would need to understand better what you mean by requiring data transparency, but I am strongly in favor of transparency for government officials, agencies, and other public entities.

#4 worries me greatly, as it could threaten the very platform this piece has been published on. I am extremely grateful to Chris Best and Hamish McKenzie for taking a strong stance in favor of free speech despite ongoing pressure from pro-censorship advocates. The discussion provoked by this Note from Hamish is well worth perusing for those who wish to understand the nuances of this contentious debate:

https://substack.com/profile/3567-hamish-mckenzie/note/c-15043731

As you formulate solutions to address the challenges of AI, I ask that you never lose sight of the necessity to protect our freedom of expression. As Michelle Stiles writes in “One Idea To Rule Them All: Reverse Engineering American Propaganda”:

“The greatest attack on language is censorship and this must be resisted at every level. You cannot have a free society without free speech, period. Any attempt to argue that others must be protected from offense and hurt feelings should be utterly repudiated. No government, no company, no fact-checkers can ever be the arbiters of truth.”

May 5, 2023 · edited May 5, 2023

"...a small number of trolls, foreign agents, and domestic jerks gain access to the megaphone that is social media, and they can do a lot of damage to trust, truth, and civility."

What does it say that two of the three examples here are purely subjective? What is a "troll" or a "jerk," other than someone you don't like? You might define these characters one way, but that doesn't mean the next person would.

May 6, 2023 · Liked by Zach Rausch

Great article in The Atlantic. As a software engineer with some security background, I think the argument for marking AI content as such should be flipped on its head. It is much harder to get all models (many of which are open source) to mark generated pictures. That genie is out of the bottle; there is absolutely no way around it. What you CAN do is the exact opposite: require phone and camera manufacturers to digitally sign all pictures at the point of capture, using a secure chip (like the one in your credit card). That is much more viable and requires far more work to break.
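For the curious, here is a minimal sketch of what point-of-capture signing could look like. The key pair is generated in software purely for illustration (on a real device the private key would live inside a tamper-resistant secure element), and the helper names are hypothetical, not any manufacturer's API:

```python
# Minimal sketch of point-of-capture image signing. The key pair is
# generated in software purely for illustration; on a real device the
# private key would never leave the secure element.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

device_key = ec.generate_private_key(ec.SECP256R1())  # stays on the device
public_key = device_key.public_key()                  # published by the maker

def sign_capture(image_bytes: bytes) -> bytes:
    """Sign the raw sensor output the moment the photo is taken."""
    return device_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the manufacturer's public key can check authenticity."""
    try:
        public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."
sig = sign_capture(photo)
print(verify_capture(photo, sig))         # True: genuine, untouched capture
print(verify_capture(photo + b"!", sig))  # False: any edit breaks the signature
```

The point of the flip: instead of trying to mark every fake, you cryptographically attest every genuine capture, and anything unsigned is treated as unverified by default.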


I really appreciate this collaboration with Eric. I am concerned, though, about including Renee DiResta, given what Michael Shellenberger has recently been writing about her. Hopefully your desire to preserve liberal democracy will prevail.


And what about privacy? Authenticating users? Does that mean the end of using pseudonyms? Look, personally, if I were independently wealthy and didn’t have to worry about losing my livelihood I might use my real name. But for now my employer doesn’t need to know what I think about certain things let alone my creative work. Employers are looking for any excuse to fire employees (they’re liabilities on the books, not assets). No thought police for me, thank you very much!


The full essay is behind a paywall, so I apologize in advance if I’m misinterpreting your reforms based on this Substack.

You write: “we saw that social media and AI both create collective action problems and market failures that require some action from governments, at least for setting rules of the road and legal frameworks within which companies can innovate.”

The idea of “market failure” is nonsense. It is based on the (false) premise that proponents of free markets promise perfect outcomes, and that any time a less-than-perfect outcome arises, it’s time to send in the troops.

Of course, free-market proponents DON’T promise this. Our claim is simply that free markets provide the mechanisms by which people may arrive at better outcomes.

What are those mechanisms? Primarily, it comes down to individual accountability: if, as a business owner, you fail to give your customers what they want, you will go out of business. Etc.

This is key, because the solution proposed by the “market failure” folks is to have the government come in and enforce the kind of outcome we/they want.

But think about it. The defining feature of government is that there IS no real accountability. When the FDA fails, over and over again, to do what it says it is doing – protect the public from dangerous drugs – does it go out of business? No. Maybe the higher-ups will be replaced. But the (dysfunctional) “business” will keep on doing what it’s doing. And the same goes for every single government agency.

The idea we’re sold in our Econ 101 classes about “market failure” and the necessity for the state to come in and regulate markets is nonsense, because it does not take into account the nature of the state. What we’re told in those classrooms is: “Look at how the market didn’t provide the outcome we want!” (Which may or may not be true; let’s say it is.) The instructor then makes the fantastical leap in logic to assert that “this is why we need the government to regulate X, Y, or Z market.”

There is no attempt to explain how or why government decision-making will result in better outcomes than market-based, individual decision-making. It’s like fairy dust: just sprinkle it on whatever you don’t like, and it will magically transform into The Right Thing.

Except it doesn't. And we have a whole history of the regulatory state to demonstrate that it doesn’t. And the reason it doesn’t is the thing that those promoting it never even consider, or attempt to explain: The nature of the state.

The assumption built into the “market failure”/need-to-regulate model of thinking is that the government (a) is benevolent, or at least morally neutral, and (b) produces the outcomes it says it will produce.

Any attempt to regulate AI that is based on this childish assumption will fail just as spectacularly as the highly regulated medical industry is failing right in front of us.

Let’s please not bang our collective heads against this same wall one more time.


How does anyone survive without comedy?😲😳😁😁😁


Interesting that what is missing from the proposals is an obligation for (social) media companies to provide transparency regarding the algorithms they use. A consumer cannot be expected to agree to the kind of manipulation that is taking place under the hood without being properly informed. Unfortunately, informed consent has been under extreme pressure from the powers that be. And Big Tech continues to act as if nothing is wrong with the secret sauce driving their bottom line: the algorithms.

We are all being manipulated by a series of machines and their operators.


"We both share a general wariness of heavy-handed government regulations when market-based solutions are available."

A potentially good way to do this is to change the incentives in the economy via taxes. Note that if one is interested in tax as a light-handed, small-government solution to collective action problems, one can tax without handing money to the government: one can pass the tax back to citizens as a dividend. The carbon fee and dividend is a well-known example.

Along these lines one could think about an AI fee and dividend.

Another idea would be to cut taxes on labour and increase taxes on material resources. This could at least buy us some time and slow down the impact that AI will have on the labour market. (Note that much of the danger arising from AI and other algorithms is the speed with which they transform society, giving us little time to adapt.)

Third, network effects are likely to increase the power of corporations to make profits from rent extraction instead of value creation. Estimating network effects to be quadratic, we should think about quadratic taxes, that is, taxes increasing quadratically with income and wealth; a toy numeric sketch of how this combines with a dividend is shown below.
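To make the fee-and-dividend arithmetic concrete, here is a toy sketch; the rate constant K and the sample incomes are invented for illustration, not a policy proposal. Low earners come out ahead, high earners pay net, and the state keeps none of the revenue:

```python
# Toy arithmetic for a quadratic fee with an equal per-citizen dividend:
# the fee scales with income squared and is returned in full, so the
# government keeps nothing. K and the incomes are invented for illustration.
incomes = [20_000, 50_000, 100_000, 400_000]
K = 1e-7  # hypothetical rate: fee = K * income**2

fees = [K * x**2 for x in incomes]
dividend = sum(fees) / len(incomes)  # equal share handed back to everyone

for income, fee in zip(incomes, fees):
    net = income - fee + dividend
    print(f"income {income:>7,}: fee {fee:>9,.2f}, net {net:>10,.2f}")
```

With these made-up numbers, the 20,000 earner pays a fee of 40 but receives a dividend of 4,322.50, while the 400,000 earner pays 16,000; the quadratic schedule does the redistributive work without any money passing through the state's budget.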

These proposals also address an issue that this article unfortunately ignores. While I agree that AI and other algorithms (such as those pushing adverts on us users) pose a danger to democracy, the article does not investigate the dangers to democracy arising from increasing income and wealth inequality. All three of the example proposals above would work in the direction of reducing inequality.


Did you and Eric Schmidt discuss requiring AI to disclose its sources, e.g. what sites and authorities were used to create the content generated by the AI? Did you also discuss royalties for the creators of that material? In its current form, AI feels a bit like a plagiarism machine. Kind of like Google.


I haven't read the article, but your suggestion for making platforms liable sounds like it would converge toward heavy censorship ("we have to worry about liability, so your criticism of government COVID policy has to be deleted") and your suggestion about "age of adulthood" sounds like it would keep kids trapped in schools, unable to take advantage of potential learning opportunities from AI tutors.

May 7, 2023 · edited May 7, 2023

This is crazy: anything with Eric Schmidt attached should set alarm bells ringing. Google and the other big platforms have long been angling for government regulation that would effectively hamper smaller competing platforms.

BUT repealing Section 230 is even worse. It would destroy Substack and, given the current use of lawfare to target dissidents in the USA, would wipe out whatever free expression (and there certainly is some) exists on the internet.

And authenticating all users could mean the loss of any anonymity. Someone (Big Tech or the FBI) will see to it that no one can post anything that arouses mob anger, because their identities and addresses will soon be leaked.

And finally, while there is evidence that social media use does seem to be harming young people, we do not have much evidence on what the mechanisms are. For one thing, Western countries like the USA have been experiencing a massive cultural revolution (sometimes described as Woke) starting around 2010, and social media (together with public education) has become an enormously powerful transmitter of this cultural shift. By destroying free expression on social media, one may simply be killing the messenger when the problem is the message. And killing free expression on the internet will probably deliver the final blow to whatever still remains of freedom and democracy in our societies. Is that what we want?


This might be out of left field, but it sounds to me like you’ve been writing about two problems that will solve each other. If social media is as bad for us as you say, and AI will make social media even more unusable, why not let it rip? Let social media be overrun by toxic fake content, so that we can finally move past it and get our lives back into the real world.


The reform ideas are interesting, but I don't think they will work (sorry for my directness) because they do not address structural issues. You can see how verification on Twitter hasn't done much, because of the infinity of nuances in what is "right" and "wrong". Algorithms have limited effect on how people think or act.

The problem is seen too much from a psychological point of view. The assumptions may be correct (from a behavioural perspective), but I am not convinced that social media is the cause of so many ills. If we do not reform educational institutions, we are just doing band-aid patching.

I anticipated the social media problem at about the same time, when I was on Facebook doing research on the social adoption of ideas. The explosion of negative side effects around 2012 has less to do with social media itself than with changes within society, changes which are reflected on social media. I just think the causal order is reversed here.

The new AI is a civilisation-altering event. In fact, I believe we are on our way out (I explained my reasons in a prior post and will expand on them in a follow-up), but what happens to us could be the difference between heaven and hell. What AI does is a conversation worth having. A conversation we must have.

Thanks.


One way to read what has happened over the last however many decades is that building a Tower of Babel was a poor choice, and its fall is a good thing.
