The Exigent Duality
Take a Stand - 07:30 CST, 6/04/24 (Sniper)
This thread got me thinking: what if in the future the entire "justice" system becomes "AI"-based? You'll get an automated court summons for some crime you won't even realize you committed but which the "AI" picked up via data mining-- sort of like these DMCA takedown requests. Then you'll show up and the prosecutor will be an "AI" subroutine, as will the judge-- you'll get convicted and sentenced by "AI", then taken away to prison by those robot dogs with guns mounted on them, which already exist. Want to actually talk to someone? That will also be "AI"-- some kind of unhelpful legal chatbot.

This will all be justified as making things "more efficient". The whole experience will be like dealing with an Indian call center, but a million times worse, since it will ruin your entire life by putting you behind bars.

Speaking of "AI", this article is interesting. In it, Elon Musk says that his company's purpose is "is to understand the true nature of the universe". I should borrow him a copy of "Theology and Sanity"-- "read this, the nature of the universe is right in here." Also from the article:

"When asked during the conference about the impact of AI on humans, he said, 'In a benign scenario, probably none of us will have a job. There would be universal high income, not universal basic income [but] universal high income. There would be no shortage of goods and services.'

However, his next words were thought-provoking: 'If the computer and robot can do everything better than you, what meaning does your life have? ... In a negative scenario ... we're in deep trouble.'"


Sure, it may be "high income", but what about inflation? During the Scamdemic they helicoptered a relatively tiny amount of money, Keynesian-style, to the general public, and look at what happened to prices. Can you imagine if they started printing and helicoptering trillions of dollars every year? You'd get that government-issued check for a million dollars a month, and be living in a dirt-floored hovel.

The article quotes another fellow who agrees with Musk's "negative scenario":

"Japanese electronic engineer Satoru Ogino shares these concerns. He told The Epoch Times, 'Having high incomes without needing to work is merely a utopian fantasy. Only through hard work do things gain true meaning, and only then will people cherish everything. If everything becomes readily available, people will lose their sense of happiness and the meaning of existence, leading to greater crises and problems.'

The rapid development of AI makes Mr. Ogino uneasy. He said, 'Whether AI development is positive or negative, it will bring disaster to humanity. In a negative scenario, the continuation of human survival will face enormous challenges. In a positive scenario, humans will become overly dependent on AI for all decisions, cease to labor and think, and eventually become puppets manipulated by AI, altering their thoughts and behaviors.'"


That is a man with wisdom, who understands human nature.

Meanwhile, this unfolding, slow-motion "AI" disaster reminds me a lot of the Scamdemic. I have enormous faults, trust me, but the one thing I did get right was refusing, one hundred percent, to go along with the clot shots or the face diapers: I was civilly disobedient, and if a critical mass of people had followed my lead, the human suffering from that period would have been a fraction of what it was.

"AI" is the same way. Rather than say "Oh gee, my company needs to use this to 'stay competitive'", or "I need to write software which uses 'AI' because that's what the cool kids are doing", people need to stand up and say "this is wrong, and I refuse to go along with it." Unfortunately, I'd say the odds of that are basically nil, and people won't respond in the way I suggest until immense damage has been done.