The New AI Teaches Humanity How to Be Better Liars and Will Never Replace Humans

It has been a year and a half now since the first widely adopted Large Language Model (LLM) AI app was introduced to the public in November of 2022, with the release of ChatGPT, developed by OpenAI and heavily backed by Microsoft. Google, Elon Musk, and many others have since developed, or are in the process of developing, their own versions of these AI programs, but after 18 months the #1 problem for these LLM AI programs remains the fact that they still lie and make stuff up when asked questions too difficult for them to answer. This is called "hallucination" in the tech world, and while there was great hope when OpenAI introduced the first version of this class of AI back in 2022 that it would soon render accurate results, that accuracy remains elusive, as the models continue to "hallucinate."

Many are beginning to understand this limitation of LLM AI, and are realizing that there is no real solution to this problem, because it is an inherent limitation of artificial computer-based "intelligence." A synonym of the word "artificial" is "fake," or "not real." Instead of referring to this kind of computer language program as AI, we would probably be more accurate in just calling it FI: Fake Intelligence.

These LLMs don't actually create anything new. They take the existing data that has been fed to them and recombine it at speeds so fast that the older technology powering programs like Siri and Alexa seems like a toddler who has not yet learned to talk. But an LLM is still limited by the amount, and the accuracy, of the data it is trained on. It might be able to "create" new language structures by manipulating that data, but it cannot create the data itself. Another way to look at it is to observe that what this technology is doing in the real world is making humans better liars, by not accurately representing the core data. Another serious flaw in the new LLM AI models is that everything they generate is derived from data that was created by someone and cataloged on the Internet, which means that whatever this AI generates, whether it is accurate or not, is THEFT!
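To make that recombination point concrete, here is a deliberately tiny sketch in Python, a toy "bigram" text generator. This is my own illustration, not how any production LLM is actually built: it produces fluent-looking output purely by reshuffling word pairs it has already seen, and it can never emit a word that was not in its training text. Real LLMs use enormous neural networks rather than a lookup table, but the limitation being illustrated is the same: the output is a recombination of the input.

```python
# Toy illustration only (NOT how real LLMs are implemented):
# a bigram "language model" that can only recombine its training text.
import random
from collections import defaultdict

training_text = (
    "the model repeats what it was fed "
    "the model cannot create new facts "
    "it can only rearrange what it was fed"
)

# Map each word to the list of words observed to follow it.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start="the", length=12):
    """Chain observed word pairs into fluent-sounding, unverified text."""
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # fluent, but never fact-checked
    return " ".join(out)

print(generate())
# e.g. "the model repeats what it was fed the model cannot create new"
```

Nothing in this generator checks whether its output is true; it only checks whether the word order is statistically plausible, which is exactly the gap that produces "hallucination."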

OpenAI Hit With First Defamation Suit Over ChatGPT “Hallucination” – Exposing the Real Dangers of AI

AI chat programs have become such a huge part of online culture so quickly that many people are still fooled despite their infancy and limitations, even though the owners of these AI programs have been very clear in warning the public that the text they produce CANNOT be trusted, since the programs often produce false information, or simply make things up when the data needed to produce a correct response is not available. They refer to this false information as AI "hallucination."

Two recent news stories demonstrate just how foolish and dangerous it is to trust the output of programs like ChatGPT in real-world applications. Isaiah Poritz of Bloomberg Law reported this week that OpenAI, the company that produces ChatGPT, was hit with its first defamation lawsuit after ChatGPT allegedly falsely accused a Georgia man of embezzling money. In another recent report, an attorney was actually foolish enough to use ChatGPT to research court cases for an actual lawsuit, and it fabricated citations to cases that did not even exist, which were then filed in a New York court of law! The judge, understandably, was outraged.

I also came across an excellent article today by Aleksandar Svetski exposing the hype around Chat AI and detailing its real dangers, which he refers to as "The Great Homogenization," where all the data on the Internet is funneled into a single, controlled narrative, something I have been warning about as well. This is a must-read article if you want to fully understand what is going on today with Chat AI, and how to fight back.

Unlike the U.S., China Issues Warning about Dangerous ChatGPT AI Financial Bubble

As someone who grew up with modern computer technology and at one time earned my living from it, and as someone who not only lived through the dot-com financial collapse but has also owned an ecommerce business for over 21 years and survived multiple economic downturns, it has been plainly obvious to me that the current financial frenzy over chat AI hype is one of the largest financial bubbles now being blown up, with no real model for generating revenue at this time. And yet hardly any financial analyst had come out to expose this very dangerous bubble, which could burst at any time and potentially sink the entire economy, until today. That warning did not come from financial analysts in the U.S., however, but from the Chinese Government.

China is the world's second largest investor in technology start-ups by venture capital, with only the U.S. spending more. Based on an opinion piece published earlier today in a Chinese financial publication, the Chinese government appears ready to regulate the AI industry to prevent a financial crash brought on by this wild speculation in the tech sector over OpenAI. Chinese shares related to artificial intelligence plunged after a state media outlet urged authorities to step up supervision of potential speculation. The ChatGPT concept sector shows "signs of a valuation bubble," with many companies having made little progress in developing the technology, the Economic Daily, which runs a website officially recognized by Beijing, wrote in a commentary Monday. Regulators, it said, should strengthen monitoring and crack down on share-price manipulation and speculation to create "a well-disclosed and well-run market"; companies should develop the capabilities they propose, while investors should refrain from speculating.

Of course, the U.S. is also threatening regulation of the tech sector, including TikTok, which currently provides billions of dollars to the U.S. economy. The other huge concern regarding the feeding frenzy over new AI technology, as I reported in a recent article, is that there are legal issues regarding privacy and copyright that could severely curtail the use of the new OpenAI technology, if not outlaw it altogether.

After Losing $150 BILLION on Botched Chat AI Launch, Google Search Head Explains Their AI Suffers “Hallucination,” Giving “Convincing but Completely Made-up Answers”

Worried that their competitor, Microsoft, was pulling ahead in the new excitement over ChatGPT AI search results, Google announced this week that it was launching its AI-powered search assistant, Bard, and posted a demo on Twitter. Amazingly, Google failed to fact-check the information Bard was giving to the public, and it wasn't long before others figured out that it was giving false information. Google's stock lost 7.7% of its value that day, and another 4% the next day, for a total loss of over $150 BILLION in market value.

Yesterday (Friday, February 10, 2023), Prabhakar Raghavan, senior vice president at Google and head of Google Search, told Germany's Welt am Sonntag newspaper, in comments published in German: "This kind of artificial intelligence we're talking about right now can sometimes lead to something we call hallucination. This then expresses itself in such a way that a machine provides a convincing but completely made-up answer." One of the fundamental tasks, he added, was keeping this to a minimum.

This tendency toward "hallucination" does not appear to be unique to Google's AI and chat bot. OpenAI, the company that developed the ChatGPT that Microsoft is investing heavily in, also warns that its AI may deliver "plausible-sounding but incorrect or nonsensical answers."

So if these new AI chat bots are so unreliable and so easily hacked, why are investors and Big Tech companies like Google and Microsoft throwing $BILLIONS into them? Because people are using them, probably hundreds of millions of people. That's the metric that always drives investment in new Big Tech products, which are often nothing more than fads and gimmicks. But if a lot of people are using these products, there is money to be made, and yet another way to track and control people.
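For readers who want to sanity-check the $150 BILLION figure above, here is a rough back-of-the-envelope calculation. Note that the starting market capitalization of roughly $1.35 trillion for Alphabet is my own assumption for illustration, not a figure from the reporting:

```python
# Rough sanity check of the reported ~$150 billion two-day loss.
# ASSUMPTION: Alphabet's market cap was about $1.35 trillion before the drop
# (illustrative figure, not taken from the article).
market_cap = 1.35e12  # dollars

day1_loss = market_cap * 0.077               # 7.7% drop on day one
day2_loss = (market_cap - day1_loss) * 0.04  # another 4% the next day
total_loss = day1_loss + day2_loss

print(f"Day 1: ${day1_loss / 1e9:.0f}B")   # ~ $104B
print(f"Day 2: ${day2_loss / 1e9:.0f}B")   # ~ $50B
print(f"Total: ${total_loss / 1e9:.0f}B")  # ~ $154B
```

Under that assumed starting valuation, the two percentage drops do indeed add up to a loss on the order of $150 billion, consistent with the "over $150 BILLION" claim.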

What ChatGPT and DeepMind Tell Us about AI

The world is agog at the apparent power of ChatGPT and similar programs to compose human-level narratives and generate images from simple commands. Many are succumbing to the temptation to extrapolate these powers to near-infinity, i.e., the Singularity, in which AI reaches super-intelligence Nirvana. All the excitement is fun, but it's more sensible to start by placing ChatGPT in the context of AI history and our socio-economic system.