
AI Risks for Preppers – Part II – ChatGPT and Friends

End of the World Listening: Operation Mindcrime

Slow-motion EOTW: South Korea fertility rate is 0.84

I’m putting this one at the top; it just came out today. OpenAI creates team to manage superintelligent AI (you know, Skynet): “The company said it believes superintelligence could arrive this decade. It said it would dedicate 20% of the already secured compute power to the effort” (of figuring out how to keep it from killing us).

Humanity just can’t help itself, can it? “The vast power of superintelligence could…lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

One thing I think is so funny (in a dark Terminator kind of way) is that even as we plow full speed ahead on AI, there is this insane belief that we will be able to ‘manage and control’ whatever entities we create (assuming we create any). Maybe for the first 15 minutes they exist; after that, we should prepare to serve our new robot overlords. Here is a company that literally says it is trying to create an entity that, in their own words, ‘could…lead to human extinction’…and we all think…eh, what are the odds?

Even if the worst case never comes to pass, the Internet needs to prepare for a day when the vast majority of content is generated by AI, not humans. Sorting signal from noise will be nearly impossible. I saw this start a month ago when the art sites began getting overwhelmed by AI content. Perhaps this will force us back to communicating in the real world? One can hope!

AI News and Information Links:

ChatGPT passes medical board exam

ChatGPT passes law and business exams

ChatGPT passes the Bar exam, doing better this time (two months later), scoring in the 90th percentile of test takers

Schools and Colleges ban use of AI (January)

Some Schools and Colleges telling kids and teachers to capitalize on AI (May)

Something I said last post: we won’t be able to know reality online. “When anything can be faked, everything can be fake,” McGregor told CNN. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

Lawyer sanctioned for citing fake cases generated by AI (we laugh, but most of us are going to fall for AI-generated content at one point or another)

AI better at diagnosing your medical condition than a general practitioner

More of that

AI provides more compassionate care

AI Debates Expert Debater (IBM’s Project Debater)

Old Article (2017), but Important: AI creates a better AI to do a task, outperforming comparable human-created software (this is what I consider one of the core concerns about AI…the ability to replicate easily with a focus on new instruction sets)

Large Language Model AIs sometimes make stuff up (Bonus creepiness: “The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend”; I can’t wait for my Emo daughters to start conversing with Bing)

Coders trick ChatGPT into creating sophisticated malware (this is one part of getting us to a properly Cyberpunk dystopia)

Novice creates end-to-end computer virus using ChatGPT (Deckers, unite!)

AI is going to destroy the dopamine response among humans (i.e., 90% of porn will be generated by AI within 5 years, and you will have access to personalized videos that can be altered on the fly) (The Screwfly Solution)

General AI thoughts (and the 10-20% chance of humanity surviving cite)

3,900 Tech Jobs Lost in May to AI; Hiring Going Forward to Focus on Those with AI Skills – I’ve been telling every teen and young person I know to get interested in AI if they want to have a real career path. Either that, or learn a trade; every one of them is in desperate need of workers.

Restraining AI with AI (This suggests our only hope lies in giving AI the same competitive natures that caused all the wars and bloodshed in history. The article does touch on something else important: regulation won’t work, because it moves at a human’s (read: snail’s) pace compared to algorithmically improving AI. The big challenge I see with this article’s approach is the same one we’d face if aliens have ever truly visited Earth…one generation of AI (or one single AI) would likely get light-years ahead in terms of ‘power’ (and it could happen in time frames we can’t even monitor), similar to the big bad in The Boys, to the point where no one and nothing could hold it accountable)

Chat-Powered Toys – And you thought Teddy Ruxpin was creepy.

China has AI goals too – one thing we know for certain is that even if the Western world comes up with a smart regulatory regime (unlikely, IMO), our enemies won’t feel bound to do the same.

Military Examining AI Use – What could go wrong?

Hollywood worried about AI (Boo hoo?) – Most think AI will hurt entertainment quality…HAHAHAHAHAHAHAHAHA – I’d say 1 in 100 shows coming out of Hollywood is worth watching.

AI Robots as Caregivers – This seems pretty useful!

______________________________________________________________________________________________________________

One trend I’ve noticed while reading about emerging AI risks/rewards is the tendency to fall into either the Doomsday-is-Around-the-Corner camp (though not so straightforwardly put, also here) or the ‘this can’t possibly go wrong’ camp. One erroneous line of thinking from the second group goes like this: ‘Hey, you were wrong about X terrible scenario (one example here), so AI is safe and full of sunshine and rainbows.’

I think discernment is going to be important going forward…most of the media simply can’t be trusted on AI reporting, as they don’t have any reasonable expertise on the subject. (As usual, they’ll put a lot of words on paper though.) I’ll be the first to admit, neither do I, so I take everything I read with a grain of salt and advise you to do the same (hence all the links, so you can catch up yourself!). For example, the folks at Less Wrong seem both knowledgeable and sane, and seem to cover a broad range of AI subjects. Yet this article sounds sane, and still puts ‘humanity’s chance of surviving the next 50 years at 10-20%.’ I wonder if I am so conditioned, even subconsciously, to want to believe that’s a crazy thing to say, but given the totality of my research, I’m not so sure. (And yes, I’ve implied the same on occasion, but I know I’m crazy, and much of what I say is tongue-in-cheek.)

I think my increasing cynicism (as compared to my earlier writings on preparedness) has to do with ‘guardrails’. I used to feel that the media, the government, and benevolent corporations, while not always worthy of ‘trust’ exactly, would have their interests aligned with the people’s enough to force them to protect us whether they really felt like it or not. I no longer believe that’s the case. I do believe we are at the beginning of our Cyberpunk moment, and we will look back and say, this is when it all started.

How can I say that? It looks very likely that two governments (ours and China’s) are responsible for 7 million COVID deaths, and there has not been a peep about folks being fired or responsibility being accepted for that. Folks, that’s a death toll greater than all but a dozen or so conflicts in history. Are we so inured to numbers that large that we don’t even register them anymore? The combined regime of government, media, and social media companies is responsible for trillions in economic damage, millions of lost businesses and jobs, and untold damage to the social fabric. And yet we move on with our lives in a haze of, ‘whatevs, let’s just get on with it’. I know, I know, what else are we supposed to do?

So as I mentioned last time when talking about reasons why I prep, I am more concerned with the entities created to protect us being the bad actors in nearly any scenario, including an AI-doomsday one. In that case, who is left to provide the guardrails? A media that doesn’t understand it? A science corps that relies on government largesse for its livelihood? It’s as if there were a little button on the CDC’s desk that said, ‘I think experimenting with making a virus more potent and more transmissible is a good idea…such a good idea that while we can’t get away with it in America, we’ll just send some cash to our communist friends in Asia, and they’ll help us out. What could go wrong?’ These are the people we’ve put in charge. It is such a ridiculous combination of incompetence and malevolence that I have to blame our social-media soma-endorphin haze for the fact that we haven’t marched on DC with torches and pitchforks.

Even (and especially) when the government means well, very few want to call out the negligence and failure of the powers that be, for fear that we’d claw back some of the untold responsibilities we’ve given it. Take the original Food Pyramid. For those of us who read Gary Taubes’ life-changing ‘Why We Get Fat‘, what is wrong with this government-created and -approved guide to living our lives is immediately obvious. The entire bottom of the pyramid is made up of the foods we should be eating the least of: pure carbs. And the fats and oils we should be eating more of? At the top. And yet this is what we were told to eat to be healthy, going back to bad advice given by bad science as early as the 1920s. (I seriously can’t recommend Gary’s book enough; I have bought and given away many copies, that’s how important it is.) How many people have died of diabetes, heart disease, and any number of other correlated ‘Western’ diseases because of it? If we put together a list of deaths caused by government advice, experiments, and ‘good intentions’, I wonder how many people it would add up to? Eight figures, surely.

The reason I keep going down this particular rabbit hole and relating it to prepping is that there simply aren’t enough bodies looking out for threats against the populace. Combine that with the many actors worldwide who are actively pursuing the worst AI can do, and you don’t just have one potential black-swan scenario, you have hundreds or thousands.

Similar to the question above, do you really believe there isn’t some bureaucrat somewhere asking ChatGPT, ‘How do I make a worse COVID?’ or ‘How do I use AI to perpetuate my own power?’

If you’ve read any good articles on AI, good or bad, or have examples of either, send them my way and we’ll add them to this list if needed!

Food, water, fire, shelter, light. In triplicate if possible. Do a little every day to make yourself and your household more resilient. Once you have that down, think about what rebuilding an agrarian society looks like.

Love y’all, peace!
