Strategies & Market Trends : 2026 TeoTwawKi ... 2032 Darkest Interregnum

To: Cogito Ergo Sum who wrote (196552)2/21/2023 5:41:15 PM
From: sense  Read Replies (1) of 208268
 
AI is our downfall... but we will not, as a group, realise it until too late...

[Youtube links in mine below... as links only... to make them sensible in context of my own use of them...]

It certainly does contain that potential... mostly what we're talking about... even if not there yet. This article from ZeroHedge isn't the "hair on fire" sort you're seeing dominate just now... but it's closer to the mark than most... What ChatGPT And DeepMind Tell Us About AI


AI is still... as all technology... "a thing" the utility and risks of which depend on our choices. And, that shifts the focus [as I have noted of Second Amendment arguments: "guns don't kill people, people kill people"] to require asking... "what are our choices likely to be... given what we know of it... and ourselves".
So... don't get me wrong... I fully understand your pessimism, and the reasons for it, even if displaced a bit... but, don't miss that the origin of the pessimism is not the technology... but that "people kill people"... and the rest of it in "what we know of ourselves". It seems inevitable that AI is going to force change in that... some of which may result from introspection... and some... not so much.

But, given the origins... don't expect to see much right now... suggesting society will be taking an approach to those potential risks... like that approach I noted in relation to the American Chestnut and GMO tech ?

It's reasonable to be pessimistic... even as, having just been through what we have... "some of us" have managed to learn exactly nothing from the experience... or, worse... another from ZeroHedge making that clear: Biden Admin Negotiates Deal To Give WHO Authority Over US Pandemic Policies






I've been posting on that concept for the last week... that the history of human behavior, seen as the continuum between "the quality of our choices"... and "Great Moments in Unintended Consequences"... isn't exactly a thing worth bragging on ? And, AI or not, we could do better... make better decisions... if we were willing to be honest about the fact we make bad ones for some very obvious reasons. But, we're not... and that's the history of politics in the last two centuries, and far longer... as those with power are unwilling to be honest, and usually feel no real need to be... unless and until on the threshold of revolution [China Covid policy]... and then, still maybe not [January 6 / election fraud exposed] ?

So, I'll agree that if we are to survive the next decade or two... it will be because we find ways, with or without AI mediating our functions in analysis, to fix that problem... as requisite precedent to enable us to APPLY CORRECTIONS... That, as it has been, REMAINS far and away our greatest existential risk... that we empower "people" [Soros, Gates] and "systems" [all governments now] with the ability to impose insane existential risks on us... while having no effective means to force them to stop doing so... while, of course, the dishonest in power... are unlikely to be keen on enabling "change" in spite of how obvious the need.

It is already apparent that this new "thing"... has great potential to amplify our own potential... in new ways... which I have addressed as showing this new thing everyone is excited about today... as offering us a new "magic mirror"...

But, what the "magic mirror" shows us of humanity... and particularly of human decision making... is horrific.

AI has some spectacular potential that even people with a background in the "tech" industry (but not AI) are surprised by (Youtube): I tried using AI. It scared me.



Note the fear in the link above... less about insane killer robots taking over the world... more about the usual in "disruption" impacts... "surprise"... that it's already as good at routine human tasks as it is... A perhaps under-discussed feature of ChatGPT... is that "as good as it is"... misses the point in... how quickly it will be expected to get better (or worse... depending on perspective)

GPT-4 LEAKED: How Google's NEW AI Will Crush OpenAI & ChatGPT In 3.. 2.. 1...

This one, linked below... does a better job of pointing out some of what is scary... even if making the mistake, still, about how much of that isn't "the technology doing it", but far more about the "magic mirror"... and, perhaps, "early days" in devising functional guard rails ? It clearly makes the point, though, without much mention of it... about "the guardrails" being avoidable with the right "trick"... still meaning that there is someone out there able to use it without the guardrails being designed for the consumer product ? And, yeah, some of THAT really IS scary... even while that being presented for us... is ALSO being paired with a lack of awareness of my point... that having a "magic mirror"... means it is not doing what you ask of it in a vacuum... but is showing you reflections of yourself (or, of all of us) in what you ask of it... The point is... the people using it are likely to try to do evil things with it... if you let them... as the guardrails won't apply to everyone.

Testing the limits of ChatGPT and discovering a dark side

People do tend to react to new technology in the way you are... "it's the end of the world as we know it"... that being the subject of this board already, without having the focus being on AI functions unexpectedly providing the accelerant. TEOTWAWKI... being most often correct with the emphasis on the "as we know it" part... i.e., "change"... as also occurred with the invention of the printing press, the telephone, the internet... etc., etc.

Some of that is... having horseless carriages running horses off the road... IS disruptive. Which doesn't mean either that you can clearly see... from the late 1800's... HOW disruptive it will become... or what new paths that disruption itself will steer humanity down in the very near future. Did anyone ever forecast that automobiles... would force our consumption of limestone higher, as massive new concrete highways would be needed to span the continent? Hmmm. If you did see it... when did it become useful to act on ?

It's always important to understand the potential... and the risks... along with the other "mundane" stuff that tends to accompany change... and seek to optimize the potential and minimize the risks... as, say, the impact of social media on suicide rates among young teen girls... And, THAT, particularly... as a new tech like ChatGPT might well be... not itself responsible for... but enabling and exploitive in new ways that amplify problems like that which we already know about... to deliver dysfunction in social media on a whole new level... IF the evil that PEOPLE already do to each other on social media... is allowed to be amplified by an innovation.

We already have a fairly decent sized reservoir of "man made existential risks" that we've been dealing with... or not dealing with... over the last seventy years. Nuclear weapons clearly are one of those risks that, following their introduction as a means of winning and ending wars, has... thus far... had us "step back from the edge"... every time we've been pushed towards it a bit too close, or a bit too fast... for comfort. But, while we've "been careful"... it's not the case that we've been successful in non-proliferation, in spite of recognizing its benefits in reducing the "human driven global systemic existential risk"... and "the limits" then become a subject of games... as in North Korea, Iran... in the approach to limits.

Then, my concern, expressed here often enough... not just on how deplorable decision making is generally, but more specifically on how deplorable the current crop of "global leaders" are... as we're clearly, on both sides, digging in now to "soon" ramp up the intensity of the early opening days in WW III... with "nuclear weapons" unable to be avoided as a part of that persistent set of risks... including most obviously, precisely in the utility of the Chinese balloon incident... involving China in seeking to provide timely strategic targeting information necessary to enable "someone" in such use...

My own expectations are altered by that event. I used to think it might still be useful to try to avoid war with China by building on diplomatic efforts... When the guy you're talking to about that... walks up to you, and tapes a target on your chest... ? Avoiding war can't occur without good faith... and there is none apparent on the Chinese side... leaving only a newly unconstrained set of considerations about HOW to conduct that war that China is insisting on having...

There is also likely to be a parallel acceleration in a whole lot of other "sudden awareness" occurring now... rightly generating concerns about those things where "the change that's coming"... is actually not a future function of change coming... as that change has NOT been lagging behind what ChatGPT is introducing people to as awareness in a first consumer / public interface... I expect "what exists" has us already far advanced, beyond "just" that point of "first awareness"... That still "future focused" concern, as another link posted here yesterday highlighted... in the "killer robot" problem.

So, push that "concern for the future in the years ahead"... with the expectation that you've just not been told, yet, about where the tech is now... perhaps already years ahead of what others' "initial awareness" suggests ?

Note that the above, addressing "machine learning" and other limited sets of functions... and the "killer robot problem"... not AGI... ends up determining that the biggest threat is in the Unintended Consequences of our enabling machines to mindlessly act, as programmed to do... resulting in our altering our own behavior to simply "let them" without taking responsibility for it... which they suggest is a risk that only occurs along with the removal of "man in the loop" participation...

And, back to "now"... with the focus on current events... and how does AI alter the picture... now ?

So, the patently obvious in the war in Ukraine... that it is being purposefully handicapped and constrained, thus far, ENTIRELY... by Russia's self limits... and by the west imposing limits both on Ukraine's material, and Russia's potential... There is nothing in that which suggests that "accelerations" enabled by Russia (with or without China's help) in spite of the west's attempt to "contain" and throttle the war back... will not be met with an at least "equal and opposite" response ? The more that occurs... the slipperier the slope and the steeper the incline... the more the advantage shifts to Ukraine... who have been hobbled by the west as much as or more than enabled by it... enabling the stalemate we see now, as a result of Russia's being given the advantage in what they bring to the fight... to allow it to be conducted entirely on their own terms. The west isn't doing more than "countering" that on the same terms... without applying any of their own advantages...

I've discussed that before... as "a repeat of the battles of WW I and WW II"... as if nothing has occurred since then to obviate the utility of fighting wars that way ?

Pair Ukraine's clearly stated desire to "begin training pilots on F-16's now"... because we're going to need them by the time they're ready... with... perhaps some over-statement in this vid in terms of how far along that process is/was (Youtube)



This is how the F-16 will change the war in Ukraine



An F-16 given to Ukraine... is still a very long way from bringing the material standard in the war in Ukraine up to NATO standard... as would enable the sort of capabilities that NATO would bring to a fight... that might work to obviate "superiority" in the sorts of things Russia brings ? But, I do see evidence in that video suggesting that an American F-16 in 2023... is not "the same thing" as an American F-16 was back in the Gulf War ? Russia, meanwhile, hasn't really improved "in that sort of stuff" since the end of the Cold War... certainly not in a meaningful way... or in those things that apply now on the field in Ukraine.

If the war is to change, soon... it is highly unlikely that it is going to change with "more of the same", only... rather than change by altering the utility of "the same" in balance... by changing it out for "something new and different" in the context. And, the F-16 might well be an instance of that...

So, is that true... that pilots need to be trained now... to enable Ukraine's use of F-16's if/when some Russian expansion of their war effort... requires their use ? ...

The US just revealed its AI piloted F-16 can dogfight!
I doubt that AI is slated for combat field tests in F-16's in Ukraine this year... Or, is it ?

But, that's really not the point, still... ? It's obvious that they'd not be doing what appears to be... publicly using F-16's in routine AI software validation... and doing it openly, in at least one mission set... (which clearly is not limited in its application only to the F-16)... if this was considered to be some big secret ?

It does mention "the AI" as thinking... in this instance... that overcoming resistance is better enabled just by "going big" and being more aggressive in engaging... which clearly makes sense in the military context, more so where there's no pink body in the seat...

But, "to increase the odds of winning"... in other context... whether you should or not ?

Is that "the same" as what the AI is telling Justin UnTrudeux he should do in suppressing dissent in Canada ?