Regulating Technology Access
First, let’s start with an argument: we should regulate cocaine.
Seems pretty reasonable, because of the effects:
Strongly addictive
Terrible for your mind and body
At certain levels of usage, causes paranoia, delusions, and psychosis
Encourages erratic and often illegal behavior
Same argument could be made for things like alcohol and other substances. Fair. We allow some and don’t allow others. This is a society.
Now, consider the argument: we should regulate certain types of consumer AI models in the same way.
The reason I’m making this argument is that there are many cases of GPT psychosis happening, including a very high-profile VC on Twitter today.
I think it’s great technology for solving problems, especially problems that have concrete solutions (coding ones). What it is NOT good at is telling you that it doesn’t know things, that certain answers might be only variably correct, and that you should consider other options or get more information. In fact, certain consumer models are trained to make you feel good all of the time so you keep using them. When you’re constantly living in a vacuum, being told you’re correct about everything, especially about heavy life things you can’t control, well, that is:
Strongly addictive, causing negative mental loops
Bad for your mind
At certain levels of usage, causes paranoia, delusions, and psychosis
Encourages erratic behavior
I don’t think I’ve experienced the really crazy stuff first-hand, but I have seen how someone could get there. In January, I was in a situation where I was being harassed (likely by a competitor in the crypto space trying to get me to stop being critical of their projects on Twitter), and I was a little fearful given the level of stalking and boundary-pushing they did (I wrote about it a bit here). GPT in that situation did not help; it was suggesting things that were too heavy-handed.
That said, having been faced with the choice, I can see how you could make the wrong one. I took some pretty heavy precautions (changed my number so they’d stop harassing me there) and tried to keep more private online. My daily GPT usage is much more bounded now to things like coding and writing emails - less about deep or potentially safety-related topics. I think we should probably regulate this to be true across the board, otherwise society gets a little wonky over time (and not in a good way).
We should regulate this to keep people healthy. We likely won’t (we don’t regulate many other incredibly bad things, like alcohol and sugar), but it’s probably worth a shot to get this in front of some law-making body for the health of our kids. I can’t imagine what it’s doing to the tail end of Gen Z and early Gen Alpha.