
by Brian Shilhavy
Health Impact News
Yesterday, Jemima McEvoy, writing for the Tech publication The Information, published an exclusive report [1] about a “secretive” Tech company named ZaiNar, which has spent nine years developing superior tracking technology that can pinpoint people’s locations even inside buildings, something that current satellite-based GPS tracking technology cannot do.
The company is now coming out of the shadows in pursuit of contracts for its technology, and is currently valued at around $1 billion.
Daniel Jacker, the CEO and co-founder of ZaiNar, invited McEvoy to see the first-ever public demonstration of their technology, which they claim is “the most accurate location-tracking tech on the planet, capable of pinpointing an object’s whereabouts within inches—indoors and outdoors—from a great distance away.”
In the rest of this article, I am going to report on this technology, as well as other recent technologies coming out of the AI spending frenzy, to illustrate the TRUE dangers of this emerging technology, as opposed to the AI hype that tends to grab the headlines and thereby minimizes the actual dangers this technology poses for all of us.
I’ll give you the spoiler to this article now: We are NOT powerless against this emerging technology. The solutions remain the same and quite simple: just “unplug” and stop using their products and services.
We now have multiple generations that have grown up with this technology and falsely believe it is essential to our lives. They have no memory of the not-so-distant past that the older generations lived through, when we did not carry around cell phones or have access to the Internet for most of our lives growing up.
It is no surprise, therefore, that new movements among the younger generations are gaining steam around the concept of “unplugging” from the digital, artificial world, and spending more time in the REAL world with REAL life.
Secretive ZaiNar Exits Shadows, Targets $5 Billion in Deals for GPS Alternative
From The Information [1] (Subscription needed.)
ZaiNar says it has developed innovative, ultraprecise tracking tech that will teach robots to see and help power the physical AI era. It will also freak out everyone who worries about digital privacy.
Excerpts:
A couple of weeks ago, Daniel Jacker, CEO and co-founder of ZaiNar, made me an alluring offer: Would I like to see the first-ever public demonstration of the technology his startup had spent nine years laboriously developing in anonymity? He described it as the most accurate location-tracking tech on the planet, capable of pinpointing an object’s whereabouts within inches—indoors and outdoors—from a great distance away.
Sure, Google Maps and Apple’s Find My feature are pretty terrific, but even in the best cases, they typically can only determine someone’s location to within dozens of feet and sometimes can’t find a device’s location at all if it’s inside or underground.
I knew Jacker had lined up a cadre of major investors, including Steve Jurvetson and Yahoo’s Jerry Yang, and had revealed the startup’s existence in February by announcing they’d valued the company at $1 billion.
I couldn’t possibly turn him down. So Jacker, 37, and I met at a 20,000-square-foot warehouse in Belmont, Calif., previously occupied by GoPro.
When we entered a ground-floor room divided by thick walls and cluttered with metal racks, Jacker handed a smartphone to one of his engineers, whom he’d enlisted to help with the demo. The device was connected to a 5G network through four antennas installed on the walls.
It should’ve been very difficult to pinpoint its exact location without adding specialized equipment to the device or the room; plus, the warehouse’s interior metal roof would’ve made traditional GPS tracking challenging. But on a laptop monitor, ZaiNar software showed it as a tiny blue dot on a map of the warehouse floor.
As the engineer began to walk around with it, I kept waiting for the blue dot to disappear—or perhaps lag behind his movements. It never did, not even when he slipped into a narrow hallway between two anechoic chambers—rooms that use special tiles and foam to block out sound, satellite connections and wireless internet.
Jacker said his technology can track any phone, car, drone or robot in almost any environment as long as it’s roughly within a mile of a 5G base station, antenna or other network receiver.
The one exception: He and his four co-founders haven’t yet figured out how to make it work underwater.
He framed his company’s technology in cinematic terms:
“We know where everything, everywhere, all at once is.”
Jacker sees the startup’s tech as an alternative to GPS-based tracking. Every device that can connect to wireless internet, like Wi-Fi and 5G, sends out radio signals to stay on the network.
ZaiNar’s software uses those signals to track a device’s location within 4 inches, according to Jacker. The software works indoors using private Wi-Fi and 5G and outdoors using 5G networks operated by mobile carriers.
ZaiNar’s technology uses a specific signal transmitted through radio waves called a sounding reference signal, long part of wireless technology. A device can send out this signal as much as 500 times a second, and since it transmits so frequently, it’s useful for tracking a moving device, like a drone or a robot.
Apple recently added a feature that allows users to limit the precision of location data shared with cellular carriers. However, device makers like Apple and Google cannot prevent their devices from emitting the radio signals ZaiNar is analyzing, Jacker said, which means its technology could constantly monitor them anytime they’re connected to a network.
At ZaiNar’s Belmont warehouse, I couldn’t help but feel a little disturbed as I watched the blue dot weave across that laptop screen.
Sure, the technology was impressive, but the little dot represented a person. And we, as people, already exist in a world where our digital privacy has been steadily eroded for decades. ZaiNar’s technology could undermine it even further, leading to more intrusive forms of marketing, government surveillance and other possible abuses.
Since ZaiNar’s technology can granularly track a device’s movement using the signals a device broadcasts to stay online, it makes permissionless tracking very easy. (A user can’t turn off this type of location tracking the way they can other location services on a smartphone, though they could evade detection by putting their devices in airplane mode.)
ZaiNar sees this as a major corporate selling point, and while I was reporting this story, a spokesperson for the startup described tracking items like phones and cars “without cooperation from those devices” as a large part of the company’s “key IP moat.”
Jacker said he has firm boundaries for his technology’s use.
The company does not do deals in Russia and China and does not integrate with Chinese firms such as Huawei or ZTE, both of which the U.S. has identified as national security threats because of their connections to the Chinese military.
But what about an organization like the U.S.’s Immigration and Customs Enforcement? Jacker said he wouldn’t ever allow the federal agency to, for instance, use ZaiNar’s tracking for immigration raids.
“That’s crossing a line for us,” said Jacker, saying it would be “too Holocausty” to consider.
But he could see ICE using ZaiNar technology for other operations.
“But border security? Totally fine,” he said.
Full article [1].
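The tracking described in the excerpts above, using the arrival of a device’s radio signals at several fixed antennas, belongs to the general family of multilateration techniques: differences in when the same transmission reaches different receivers constrain where the transmitter can be. Here is a minimal, self-contained sketch of time-difference-of-arrival (TDOA) multilateration; the antenna layout, device position, and brute-force grid solver are illustrative assumptions of mine, not ZaiNar’s actual (proprietary) method.

```python
# A minimal sketch of time-difference-of-arrival (TDOA) multilateration.
# All positions and the solver are illustrative assumptions, not ZaiNar's method.
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def tdoa_locate(anchors, tdoas, grid_res=0.05, extent=20.0):
    """Grid search for the point whose predicted arrival-time differences
    (relative to anchor 0) best match the measured ones."""
    xs = np.arange(0.0, extent, grid_res)
    ys = np.arange(0.0, extent, grid_res)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Distance from every candidate point to every anchor.
    d = np.linalg.norm(pts[:, None, :] - anchors[None, :, :], axis=2)
    # Predicted TDOAs relative to anchor 0.
    pred = (d[:, 1:] - d[:, :1]) / C
    err = np.sum((pred - tdoas) ** 2, axis=1)
    return pts[np.argmin(err)]


# Four hypothetical wall-mounted antennas (metres) and a hidden device.
anchors = np.array([[0.0, 0.0], [18.0, 0.0], [0.0, 12.0], [18.0, 12.0]])
device = np.array([7.3, 4.9])

# Simulated measurement: arrival-time differences relative to anchor 0.
dist = np.linalg.norm(anchors - device, axis=1)
tdoas = (dist[1:] - dist[0]) / C

est = tdoa_locate(anchors, tdoas)
print(est)  # recovers the device position to within the grid resolution
```

The key point for the privacy discussion that follows: the device does nothing special here. Any regular transmission it makes to stay on the network is enough for the receivers to do this computation without its cooperation.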
Notice how this 37-year-old former Stanford student thinks he alone has the power to decide who can use this technology, and under what circumstances!
But this entire system relies on two wireless technologies: 5G through cell phones, and Wi-Fi connections. If you do not use either of these, you cannot be tracked; it is as simple as that. I have not used Wi-Fi in my places of residence for many years now, using only network cables to connect to the Internet.
And I do not use a cell phone to talk to people, which makes a lot of people very upset because they don’t want to spend the time to use more secure methods of communication.
The one “device” that constantly communicates with “the network,” and that most people do not even consider, is their vehicle. From the article:
In an initial test of the idea, they placed a 4G antenna on a concrete wall bordering Campus Drive, the road that loops through campus. As each car drove by, they found they could detect it through the wall using the radio signal emitted by the car’s tire pressure sensor.
Make sure the vehicle you drive is NOT connected to the Internet. Many modern cars, especially Teslas, probably do not even give you that option, because without a connection the cars would cease to function altogether.
Here is another exclusive report published by The Information this week dealing with being tracked in your work place:
Exclusive: Zuckerberg Tells Meta Employees: We’re Tracking You Because You’re Really Smart
Meta Platforms’ CEO tells staff that using their computer activity to train AI models could give it an edge over rivals
Excerpts:
Meta Platforms CEO Mark Zuckerberg said using employees’ computer activity to train its AI models could give the company an edge over rivals, arguing that its staff are of higher average intelligence than contractors typically used by data-labeling firms.
In a company-wide meeting on Thursday, a recording of which was reviewed by The Information, Zuckerberg described internal employee activity as a valuable source of training data—one that could outperform industry-standard approaches that rely heavily on contractors.
“We’re in a phase where basically the AI models learn from watching really smart people do things,” he said, adding that enabling systems to “observe really smart people doing those things is very important.”
Zuckerberg was responding to a question about a new policy Meta informed staff about last week that has caused concern among some employees. Meta said that it would install a new monitoring tool called the Model Capability Initiative to track keystrokes and mouse movements to help train its AI models, according to earlier reporting by Reuters.
Full article [3].
Anthropic’s Mythos: Greatest Cybersecurity Threat Ever to the Banking Industry?

Image Source [4].
If you do not follow the Tech industry as closely as I do, you may have missed what was probably the biggest Tech story in April: Anthropic’s Mythos AI program, which was being developed in secret, but has now been leaked to the public.
This AI program is still not available to the public, but a preview has been sent to major banks and Wall Street firms, and it has caused a literal panic among them.
What Mythos does is use the rapid computing power of AI to test a business’s technology infrastructure to find weaknesses that can be exploited by hackers.
Just the existence of such a program poses a risk to ALL OF US, because in the hands of hackers and cybercriminals, the damage it could cause is unprecedented.
Jamie Dimon, the CEO of one of the world’s largest banks, based in the U.S., was one of the first on Wall Street to express his concerns.
For years, the risk Jamie Dimon was most concerned about was geopolitics. His answer has shifted
When Jamie Dimon is asked about the greatest risk he sees to the global economy, his answer for years has been “geopolitics.”
It’s been with good reason. In the past handful of years, Russia invaded Ukraine; a major conflict broke out between Israel and Palestine; and the U.S. and Israel then launched attacks on Iran, with the fallout spreading across the Middle East.
Add to that global tensions rising because of President Trump’s tariff regime, a threat to invade Greenland, and escalating trade tensions with China, and the drama on the world stage seems to be reaching a crescendo.
But even these concerns are now matched by a new threat the JPMorgan CEO sees on the horizon: cyber.
Last month, Fortune’s Beatrice Nolan exclusively reported that AI company Anthropic is developing, and had begun testing with early access customers, a new AI model more capable than any it has released previously, following a data leak that revealed the model’s existence.
A draft blog post that was available in an unsecured and publicly searchable data store prior to Fortune’s report said the new model is called Claude Mythos and that the company believes it poses unprecedented cybersecurity risks.
This week, Dimon was asked about his top concerns during a live podcast appearance in Oslo at the Norges Bank Investment Management conference.
“Cyber,” was his immediate response. He explained:
“The bad guys can use cyber, and they’re going to get stronger and more powerful in terms of finding vulnerabilities. It’s been written about.”
Full article [5].
Here is a press release from mid-April publishing Wall Street’s first reactions when the news about Mythos broke.
Wall Street CEOs’ First Reactions to Anthropic’s Mythos
As banking leaders tout A.I.’s potential, they warn its power is also creating complex new vulnerabilities.
Excerpts:
Banks are among the most enthusiastic adopters of A.I., but also the most exposed to the technology’s growing cybersecurity threats.
That vulnerability came into sharper focus earlier this month with the release of Anthropic’s Mythos Preview, a highly advanced A.I. model that’s drawn concern across Wall Street.
On earnings calls, JPMorgan Chase CEO Jamie Dimon and Goldman Sachs’ David Solomon said they are testing Mythos to better understand the new risks that come with rapid advances in A.I.
“A.I.’s made it worse, it’s made it harder,” Dimon told analysts today (April 14).
“While we’re trying to get the benefit of A.I., we’re also very cognizant of the risks.”
Those risks are central to Mythos, which Anthropic describes as too dangerous to release publicly because of its ability to exploit vulnerabilities in critical software. Instead, the company has invited a consortium of major businesses, including JPMorgan, to test the model internally for use in strengthening their cybersecurity defenses.
The preview effort, called Project Glasswing, takes its name from the glasswing butterflies, which use transparent wings to hide in plain sight—a metaphor Anthropic says reflects how hidden cyber weaknesses can evade detection.
The initiative, which includes other Wall Street banks as well as Apple, Google and Nvidia, will be funded by $100 million in model usage credits from Anthropic.
The release of advanced models like Mythos has created “additional vulnerabilities” beyond banks, Dimon said.
“Banks, of course, are attached to exchanges and all these other things that create other layers of risks. It’s a complex one.”
Following the Mythos release, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened leading Wall Street executives in Washington, D.C. last week to discuss the newfound threats posed by the model.
While Dimon was reportedly unable to attend, peers including Bank of America’s Brian Moynihan, Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick and Wells Fargo’s Charlie Scharf joined the meeting.
Officials of foreign central banks, including the Bank of Canada and the Bank of England, are hosting similar briefings with top financial leaders.
Full article [6].
Here comes the Surveillance State: Altman says OpenAI ‘deeply sorry’ for not flagging Canadian school shooter’s ChatGPT posts
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said.
The tech CEO said his company is focusing on strengthening its partnerships with local officials “to help ensure something like this never happens again.”
This is OpenAI and ChatGPT announcing to everyone that you are now under surveillance and anything you use ChatGPT for will soon start going to your local law enforcement.
Altman says OpenAI ‘deeply sorry’ for not flagging Canadian school shooter’s ChatGPT posts
Excerpts:
OpenAI CEO Sam Altman offered an apology to Canada’s Tumbler Ridge for not flagging controversial messages on its ChatGPT platform sent by the alleged shooter who killed eight people and injured over 25 others in the small mountain town earlier this year.
“I want to express my deepest condolences to the entire community,” Altman said in the letter, which was shared to social media by British Columbia Premier David Eby on Friday.
“No one should ever have to endure a tragedy like this,” the tech CEO wrote in his letter. “I cannot imagine anything worse in this world than losing a child.”
He added, “My heart remains with the victims, their families, all members of the community and the province of British Columbia.”
The suspect, identified as 18-year-old Jesse Van Rootselaar, was found dead from an apparent self-inflicted gunshot wound at the scene. Van Rootselaar killed her mother and younger brother before killing six at a nearby secondary school, according to law enforcement.
Altman said the alleged shooter’s ChatGPT account had been banned last June, around seven months prior to the incident, for the messages. OpenAI, however, did not flag the account to the police.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
The tech CEO said his company is focusing on strengthening its partnerships with local officials “to help ensure something like this never happens again.”
Full article [7].
Google Signs Classified AI Deal With Pentagon Amid Employee Opposition
Another “Exclusive” from The Information. If you want to get unique reporting in the field of Tech that is often not published in the corporate media, a subscription to The Information is well worth it!
Excerpts:
Google and the Department of Defense signed a deal allowing the Pentagon to use Google’s AI models on classified work, according to a person with knowledge of the situation.
The agreement allows the Pentagon to use Google’s AI for “any lawful government purpose,” according to the person—echoing language that has been controversial in other AI company discussions with the Pentagon.
The deal’s signing comes as more than 600 Google employees delivered a letter Monday to Google CEO Sundar Pichai asking him to reject the agreement, arguing that refusing classified work is the only way to ensure Google’s AI isn’t misused.
Signing the agreement represents a striking contrast to Google’s stance eight years ago, when it withdrew from the Defense Department’s Project Maven contract involving AI in drone targeting after thousands of Google employees signed a letter opposing it.
Google already has a deal, signed last November, allowing the Pentagon to deploy its AI for unclassified use cases.
Google now joins Musk’s xAI and OpenAI in having deals with the Pentagon to use AI on classified systems. Google’s deal terms appear to be more permissive to the Pentagon than the terms OpenAI agreed to in February.
Full article [8].
And of course, I cannot publish an article about updates in the Tech world without a couple of recent stories (from among hundreds) about AI failures, which expose the hype around AI that simply does not conform to reality, while drawing attention away from the REAL dangers of AI.
Another ‘hallucinated’ court filing highlights the difference between Silicon Valley and the rest of the world
From CNN [9]:
Excerpts:
We may have just witnessed the most egregious instance of workslop to date, and it’s one that matters — not only because it’s objectively funny, but also because it captures an under-discussed nuance in the way generative AI functions (or malfunctions) for different industries.
Bear with me.
On Saturday, a top-ranked lawyer at one of the most prestigious law firms on the planet apologized profusely in a letter to a judge after submitting a court filing peppered with errors, including fabricated citations, generated by AI.
“We deeply regret that this has occurred,”
Andrew Dietderich, co-head of Sullivan & Cromwell’s restructuring division, wrote in the letter, which included a three-page list identifying and correcting each of the more than 40 errors. (A little salt in the wound: Dietderich said he learned of the problems only after they were caught by opposing counsel from Boies Schiller Flexner.)
In the letter, Dietderich chalked the errors up to “hallucinations” in which AI tools “fabricate case citations, misquote authorities, or generate non-existent legal sources.”
He also said that while the firm has safeguards around AI to prevent “exactly this situation,” those policies were not followed in the preparation of that particular document.
Now, this was hardly the first (nor, likely, will it be the last) instance of fancy-pants lawyers running into an AI buzzsaw. This kind of thing happens with surprising frequency [10], though rarely do we see it from the likes of Sullivan & Cromwell, an elite Wall Street firm whose partners reportedly [11] charge around $2,000 an hour for bankruptcy cases.
Full article [9].
Dashcam Maker Motive Touts AI but Relies on Humans
From The Information:
Company sells dashcams for trucking companies to monitor drivers but has 400 Pakistani workers vetting its AI results.
Excerpts:
Last summer, a manager at startup Motive Technologies sent an urgent Slack message to 400 employees based in Pakistan. “WE HAVE A PROBLEM,” it said in all caps.
Motive sells AI-powered dashcams that allow trucking companies to monitor drivers and send alerts about crashes and other safety issues. In August, the dashcams recorded a string of collisions, but customers never received the alerts.
The AI system had detected the crashes and flagged them to the roughly 400 Pakistani workers the company employs to vet AI output. But those employees didn’t spot the crashes in the stream of video feeds they were supposed to review.
When Motive learned what had happened, it dug deeper and found more problems.
“I want to transparently share—the radius of the problem is large,” a manager told employees in a Slack message viewed by The Information.
“We reviewed a targeted 2 days of data and found bigger issues. A lot of clear misses.”
Full article [12].
More news from our Telegram channel [13] this week.
U.S. Attack On Iran Could Be Imminent
Interview of Seyed M. Marandi by Glenn Diesen recorded yesterday.
“If they assassinate more Iranian leaders, Iran will take out the leaders of the Arab Gulf States.”
Appeals court blocks FDA rule that allows women to obtain abortion drugs by mail
Bobby Kennedy Jr., who believes in the right for women to kill their babies in their wombs all the way to full term in the 9th month, refused to make the FDA ban abortion pills by mail, so Louisiana had to take them to court.
From CNN [14]:
Excerpts:
A federal appeals court temporarily reinstated a nationwide requirement that abortion pills be obtained in person, undermining access to the method of abortion that has only grown more widespread since the US Supreme Court overturned Roe v. Wade.
Friday’s ruling from the 5th US Circuit Court of Appeals is a major victory in the anti-abortion movement’s war against medication abortion, which now accounts for roughly two-thirds of all abortions in the United States.
The ruling stems from a lawsuit filed by Louisiana last year against the US Food and Drug Administration, after President Donald Trump’s administration refused to act on calls to reinstate the in-person dispensing requirement for abortion pills through the regulatory process.
Referring to Louisiana abortion prohibitions, they wrote that the current federal regulations create “an effective way for an out-of-state prescriber to place the drug in the hands of Louisianans in defiance of Louisiana law.”
Mifepristone manufacturer Danco Laboratories has asked the 5th Circuit to put its ruling on hold for seven days so it can appeal.
What the Media Won’t Tell You About King Charles
Published at his coronation in 2023 by Really Graceful [15].
The result of Charles’s visit to the U.S.?

He got Trump to drop the tariffs on whiskey imported to the U.S. from the UK.
Trump removes tariffs on Scottish whisky after King Charles visit [16]
Related:
The Post-Technological Age is Drawing Closer as Gen Z Starts Unplugging [17]
What is Life? [2]
Comment on this article at HealthImpactNews.com [18].
This article was written by Human Superior Intelligence (HSI) [19]
See Also:
Understand the Times We are Currently Living Through
New FREE eBook! Restoring the Foundation of New Testament Faith in Jesus Christ – by Brian Shilhavy [22]
What Kind of Person did Jesus Say was True with no Injustice in Them? [23]
KABBALAH: The Anti-Christ Religion of Satan that Controls the World Today [24]
Christian Teaching on Sex and Marriage vs. The Actual Biblical Teaching [25]
Exposing the Christian Zionism Cult [26]
The Bewitching of America with the Evil Eye and the Mark of the Beast [27]
Jesus Christ’s Opposition to the Jewish State: Lessons for Today [28]
Identifying the Luciferian Globalists Implementing the New World Order – Who are the “Jews”? [29]
The Brain Myth: Your Intellect and Thoughts Originate in Your Heart, Not Your Brain [30]
What is the Condition of Your Heart? The Superiority of the Human Heart over the Human Brain [31]
The Seal and Mark of God is Far More Important than the “Mark of the Beast” – Are You Prepared for What’s Coming? [32]
The Satanic Roots to Modern Medicine – The Image of the Beast? [33]
Medicine: Idolatry in the Twenty First Century – 10-Year-Old Article More Relevant Today than the Day it was Written [34]
Having problems receiving our emails? See:
How to Beat Internet Censorship and Create Your Own Newsfeed [36]
We Are Now on Telegram [13]. Video channels at Bitchute [37], and Odysee [38].
If our website is seized and shut down, find us on Telegram [13], as well as Bitchute [37] and Odysee [38] for further instructions about where to find us.
If you use the TOR Onion browser [39], here are the links and corresponding URLs to use in the TOR browser [39] to find us on the Dark Web: Health Impact News [40], Vaccine Impact [41], Medical Kidnap [42], Created4Health [43], CoconutOil.com [44].