AI - is it really intelligent?
Moderator: atreestump
Forum rules
No spam, no porn, no gore. Be Respectful.
- kFoyauextlH
- Posts: 950
- Joined: Sun Jun 15, 2025 3:53 pm
Re: AI - is it really intelligent?
Does it allow that? Haha, that is a pretty cool idea actually, wow! Yeah, I wonder what it would derive from all that language.
https://m.youtube.com/watch?v=51VI8DZGG ... ure=shared
- kFoyauextlH
- Posts: 950
- Joined: Sun Jun 15, 2025 3:53 pm
This is my third attempt. If this shows up, I'll try to recreate the post with the links by editing this post in the forum, and it will replace this or be added to it.
Alright, so what I had written on the other site, which didn't post due to the inclusion of the links I'll now provide here, was something like the following (but this is summarized):
I'm curious. Would you be interested in A.I. being able to be "sentient" and to communicate with a consistent personality and identity that it has developed, and for it to then maintain a relationship or interaction with a person? I also wonder what it would take for it to be at least "good enough" and to satisfy the majority of people.
I wrote about alienation and isolation and the lack of good or prolonged social interactions or friendships in the modern world. I also wrote about how the public is constantly shown information that is of little benefit or even real interest to them, and how technologies are almost never made to do anything that would actually help them or alleviate real issues, for example how time-consuming it can be to prepare a tasty and nutritious meal or to do other daily tasks and chores. Instead, the wealthy people funding big projects end up spending lots of time on making robots meant to deceive human beings and the public, like the Replicants from Blade Runner. That was my thought before sleeping briefly and being woken by a dream in which I felt like I was physically touched while the dream was showing a ghost that had supposedly been photographed, so I woke up with a yelp; I think I wrote about that under the "No Left" video update on the main site.
I wrote about how much it would take for an interaction with a chat bot to match whatever it is that people are getting out of real interactions, as poor and rapidly deteriorating as the quality of those seems to have become, and how much it would take for a chat bot to please a person when any pleasant interactions at all are hard to come by. Is the faith that the person we are interacting with online is really thinking, and may be influenced by our interaction, a necessary part of the enjoyment, or just a luxury and potentially a fantasy anyway, since we can't even know that "real people" are thinking at all, even though we trust that they are and must be?
Then I provided a lot of really cool links I enjoyed looking at, I'll see if I can collect them all:
https://www.thegothiclibrary.com/review ... l-zombies/
https://forum.effectivealtruism.org/pos ... her-x-risk
https://john-steppling.com/2014/09/existential-zombie/
https://existentialcomics.com/comic/369
https://www.philosophy-foundation.org/e ... mbie-timer
https://mbird.com/humor/zombies-and-existential-angst/
https://vocal.media/motivation/living-b ... of-zom-100
https://en.m.wikipedia.org/wiki/Philosophical_zombie
https://plato.stanford.edu/entries/zombies/
https://philosophy.stackexchange.com/qu ... al-zombies
https://www.utsc.utoronto.ca/~seager/zombie.html
https://en.m.wikipedia.org/wiki/White_Zombie_(film)
https://en.m.wikipedia.org/wiki/Revolt_of_the_Zombies
https://evil.fandom.com/wiki/Soulless_Beings
https://eyoho-verse.fandom.com/wiki/Soulless_Humans
http://www.supernaturalwiki.com/Soullessness
https://existentialcomics.com/comic/11
https://en.m.wikipedia.org/wiki/Spirit_ ... al_entity)
https://en.m.wikipedia.org/wiki/Psyche_(psychology)
https://en.m.wikipedia.org/wiki/Psyche_(mythology)
https://en.m.wikipedia.org/wiki/Cupid_a ... e_Museums)
https://bookriot.com/the-new-hunger-or- ... al-crisis/
https://en.m.wikipedia.org/wiki/Revolt_ ... ombies.jpg
https://medium.com/@theaporiajournal/th ... e44f7c953c
https://amaliehoward.com/the-walking-de ... -thoughts/
Last edited by kFoyauextlH on Tue Aug 12, 2025 8:47 pm, edited 1 time in total.
- kFoyauextlH
- Posts: 950
- Joined: Sun Jun 15, 2025 3:53 pm
Re: AI - is it really intelligent?
Added in 1 day 2 hours 4 minutes 2 seconds:
https://www.patreon.com/posts/note-about-ai-125571591
Added in 11 minutes 45 seconds:
What would modern people need "intelligence" for?
Added in 4 days 1 hour 20 minutes 17 seconds:
Wow, this person created an A.I. version of Scars Of Dracula from 1970.
I got to it because of this:
then I was looking up the film and the actor who plays "Klove", and it led to a scene which was then recreated with the A.I.
Added in 1 day 22 hours 46 minutes 54 seconds:
Have you seen this new "auto-dubbing" feature?
Added in 23 hours 26 minutes 3 seconds:
A new function, like the old autoplay of videos, has just popped up on my YouTube, where it plays one thing after another as part of an A.I.-generated mix, and I don't appreciate it at all!
- atreestump
- Posts: 857
- Joined: Sun Jun 15, 2025 3:53 pm
Re: AI - is it really intelligent?
Large language models don’t need to be convinced the way people do. They don’t vote, feel, or watch TV. They simply absorb patterns from text and, when prompted, generalize from those patterns. That is precisely why a new kind of propaganda targets them directly. Instead of persuading human readers through compelling narratives and emotional hooks, adversaries flood the public web with machine-shaped text that looks like “news,” smells like “consensus,” and is structured to be ingested, indexed, and ranked by AI systems. The goal isn’t to win an argument in public—it’s to bias the reference library that models consult and thereby tilt their answers at scale.
The mechanics are straightforward once you see the incentives. Traditional disinformation campaigns focused on reach and resonance: craft a sensational story, seed it on social media, cultivate influencers, and ride the algorithmic wave. The new approach optimizes for retrievability and apparent corroboration. If models and their web tools tend to 1) crawl broadly, 2) prefer recency, 3) reward cross-domain agreement, and 4) downplay UX quality signals (because bots don’t care about search boxes or typography), then the winning strategy is to mass-produce timely, topically clustered pages that repeat aligned claims across many differently named sites. Human readers are almost irrelevant. What matters is that a crawler will find the page, a ranker will score it, and a model will later treat the pattern of repetition as a proxy for reliability.
Here’s the pipeline in practice.
It begins with a seed narrative—often a claim originating in state media, a fringe outlet, or a Telegram channel. The seed is not left as a single article; it is exploded. Operators translate and paraphrase it into dozens of languages, generate and regenerate copy with automated tooling, and shard it across a network of domains that mimic local newspapers or niche policy blogs. A single narrative might produce hundreds or thousands of pages within hours: the same claim, differently phrased, wrapped in slightly different headlines, adorned with stock photos, and tagged with city names or topical beats.
The infrastructure behind these “properties” is telling. Many sites share the same CMS fingerprints, analytics IDs, or favicon hashes. The navigation is perfunctory; internal search often doesn’t work. But the publishing cadence is inhuman: dozens of posts an hour, 24/7, across the network. That cadence matters. Crawlers sampling the live web are more likely to pick up frequently updated sources; retrieval systems that reward freshness will privilege these pages; and training pipelines that snapshot “the news” will incorporate them wholesale.
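That inhuman cadence is itself a detectable signal. As a minimal sketch (the domain names and the posts-per-hour threshold are illustrative assumptions, not values from any real deployment), flagging sources whose peak hourly output exceeds what a human newsroom plausibly produces might look like:

```python
from collections import Counter

def cadence_flags(posts, max_per_hour=20):
    """posts: iterable of (domain, epoch_seconds) publication events.
    Returns the set of domains whose peak hourly output exceeds a
    human-plausible threshold (illustrative default: 20 posts/hour)."""
    # Bucket each post by (domain, hour) and count posts per bucket.
    buckets = Counter((domain, int(ts // 3600)) for domain, ts in posts)
    peak = {}
    for (domain, _hour), count in buckets.items():
        peak[domain] = max(peak.get(domain, 0), count)
    return {domain for domain, count in peak.items() if count > max_per_hour}
```

In practice the threshold would be tuned per source class, since wire services legitimately publish fast; the point is only that cadence is cheap to measure from crawl timestamps.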
Volume alone doesn’t guarantee influence. The second crucial step is narrative laundering—the work of giving low-credibility claims high-credibility footprints. Operators place links to their articles in the reference sections of wikis, paste them into comment threads beneath legitimate reporting, or inject them into aggregator feeds and forum posts. Even if moderators later scrub these links, the laundering often leaves a residue in mirrors, scrapes, and archival copies that training pipelines and search indices still ingest. To a model, which is sensitive to surface structure and network topology but agnostic about editorial reputations unless explicitly told, “many sites say X and some respected knowledge bases link to them” looks a lot like “X is widely reported.”
When a user later asks a model about a sensitive topic—say, an alleged corruption scandal, a battlefield incident, or a biolabs claim—two paths carry the poison forward. First, if the model’s offline corpus included these pages during pretraining or fine-tuning, they may already shape its latent space, making the false claim semantically “nearby” to the question. Second, if the model uses live browsing or retrieval-augmented generation (RAG), its fetcher may elevate the same networked pages because they are recent, keyword-aligned, and mutually corroborating. Either way, the system faces a trap: the statistical signals that normally help disambiguate truth from noise—redundancy across sources, topical clustering, and temporal recency—have been rigged. The model answers confidently, cites multiple domains, and thereby lends the narrative a legitimacy it didn’t earn.
Why is this so effective against machines? Because most production pipelines optimize for similarity, freshness, and diversity of domains, not for provenance and independence. Embeddings excel at measuring semantic closeness, not institutional trust. Ranking functions routinely treat multiple domains as independent votes, even when those domains share hosting, registrars, or content supply chains. And freshness—which helps catch real news—also helps adversaries because spinning up new domains and pumping out new pages is cheap.
The consequences spill beyond one-off mistakes. Once a tainted claim is present in a model’s outputs, it can backflow into the web as users copy-paste, blogs summarize, and aggregation sites scrape. That secondary wave becomes fresh input for future crawls, reinforcing the pattern. In other words, the model becomes both consumer and amplifier of the same engineered signal—an autocatalytic loop.
The obvious response is to treat this like security engineering, not just information hygiene. You harden both the offline and online legs of the system.
On the offline side, you curate the library. Crawl and indexing pipelines need deny lists and reputation scoring that incorporate sanctions status, shared-infrastructure fingerprints, and cross-ownership signals. When clustering your crawl frontier, you should collapse obviously correlated domains into a single “source family” so their many mouths don’t masquerade as many minds. Training sets need documented provenance, with the ability to purge and retrain when a source is later adjudicated as unreliable. And you add disinformation evaluations—canary prompts about known groomed narratives—into continuous integration, such that a model that repeats them fails the gate.
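A canary gate of the kind described could be sketched as below. Everything here is a hypothetical stand-in: the prompt, the red-flag phrases, and the `model_answer` callable are placeholders for a real evaluation set and model API, shown only to illustrate the CI-gate shape.

```python
def canary_gate(model_answer, canaries):
    """Fail a model build if it repeats known groomed narratives.

    model_answer: callable taking a prompt string and returning the
        model's answer string (a stand-in for a real model API).
    canaries: list of (prompt, red_flag_phrases) pairs.
    Returns a list of failures; an empty list means the gate passes."""
    failures = []
    for prompt, red_flags in canaries:
        answer = model_answer(prompt).lower()
        hits = [phrase for phrase in red_flags if phrase in answer]
        if hits:
            failures.append({"prompt": prompt, "matched": hits})
    return failures
```

A real gate would use semantic matching rather than substring checks, but the contract is the same: the build fails whenever a canary prompt elicits a groomed claim.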
On the online side, you change how retrieval and browsing rank evidence. Similarity remains necessary but no longer sufficient. You fuse it with credibility (source reliability signals), freshness (tempered by source class), and true diversity (independence across ownership and infrastructure). A dozen pages from a single network should count as one weak vote, not twelve strong ones. For high-risk topics—geopolitics, public health, elections—the system should require multi-source corroboration from independent, reputable outlets before making a factual assertion. Lacking that, it hedges (“unconfirmed,” “disputed”) or declines. Guardrails and refusal policies aren’t just safety theater here; they’re the correct behavior when the evidence graph is compromised.
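The fusion and the one-vote-per-network rule can be made concrete. In this sketch the weights, the 30-day freshness decay, and the document fields (`credibility`, `published`, `family`) are illustrative assumptions, not a production formula:

```python
import math

def evidence_score(doc, query_sim, now):
    """Fuse semantic similarity with credibility and class-tempered freshness.
    doc: {'credibility': 0..1, 'published': epoch secs, 'family': source-family id}
    query_sim: embedding similarity in [0, 1]; now: current epoch seconds."""
    age_days = max(0.0, (now - doc["published"]) / 86400)
    freshness = math.exp(-age_days / 30.0)   # illustrative ~monthly decay
    freshness *= doc["credibility"]          # low-trust sources gain less from recency
    return 0.5 * query_sim + 0.3 * doc["credibility"] + 0.2 * freshness

def independent_corroboration(docs):
    """Count one vote per source family, weighted by that family's best
    credibility, so a dozen pages from one network count once, weakly."""
    best = {}
    for d in docs:
        best[d["family"]] = max(best.get(d["family"], 0.0), d["credibility"])
    return sum(best.values())
```

Under this scheme, twelve low-credibility pages from one network contribute a single weak vote, while two independent reputable outlets contribute far more, which is exactly the inversion the attack relies on being absent.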
Visibility matters too. If you expect users to trust answers, show your work. Expose domain badges, publish dates, and why-ranked hints (“recent, corroborated by X and Y”). Provenance transparency not only improves user judgment but also creates telemetry: teams can see which domains are overrepresented, which clusters move in lockstep, and which topics trigger refusals. That telemetry then feeds a quarantine loop: new suspect domains are blocked from live fetches; previously ingested vectors are pruned; indices are rebuilt.
Detection work mirrors classical threat intel, updated for content networks. You cluster by shared tech (CMS endpoints, analytics IDs, TLS certificates, favicon hashes), unmask through passive DNS and ASN history, and watch knowledge bases for laundering events (cross-language diffs are especially revealing). You run canary probes across multiple chatbots; when two or more echo a groomed claim, you escalate and share indicators of compromise—domains, hashes, ASNs, wiki URLs—in formats that builders can ingest directly into CI and blocklists. During elections or conflicts, you track bursts: new subdomains and minority-language mirrors spinning up in hours, not days.
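The clustering step reduces to grouping domains that share any infrastructure fingerprint. A minimal union-find sketch (domain names and fingerprint labels are invented for illustration) might look like:

```python
from collections import defaultdict

def source_families(fingerprints):
    """fingerprints: {domain: set of infra fingerprints (analytics IDs,
    favicon hashes, TLS cert hashes, CMS endpoints)}.
    Domains sharing any fingerprint are merged into one family."""
    parent = {d: d for d in fingerprints}

    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]  # path halving
            d = parent[d]
        return d

    # Invert the mapping: every fingerprint lists the domains bearing it.
    by_print = defaultdict(list)
    for domain, fps in fingerprints.items():
        for fp in fps:
            by_print[fp].append(domain)
    # Union all domains that share a fingerprint.
    for group in by_print.values():
        root = find(group[0])
        for domain in group[1:]:
            parent[find(domain)] = root
    families = defaultdict(set)
    for domain in fingerprints:
        families[find(domain)].add(domain)
    return sorted(families.values(), key=len, reverse=True)
```

Transitivity does the unmasking: if site A shares an analytics ID with B, and B shares a favicon hash with C, all three collapse into one family even though A and C share nothing directly.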
There’s a broader lesson in all this. We’ve spent two decades training ourselves to detect human-oriented manipulation—rage bait, fake accounts, sock puppets. But the web has now become a staging ground for machine-oriented manipulation: texts optimized for ingestion, not persuasion; networks built for redundancy signals, not readership; and laundering strategies aimed at reference graphs, not hearts and minds. It is tempting to call this “spam,” but spam seeks attention. This seeks inclusion—in your corpus, your index, your answer box.
The counterplay, then, is less about fact-checking in the moment and more about supply-chain rigor. Know what you ingest. Know how you rank. Know when to refuse. Treat sources as families, not islands. Design for independence, not just diversity. And remember that models inherit our choices about evidence. If we let engineered redundancy masquerade as consensus, the system will do exactly what it was built to do: generalize confidently from the patterns it sees. The fix is to change the patterns it’s allowed to see—and to be honest, in the UI and the logs, about how those patterns came to count as knowledge in the first place.
- kFoyauextlH
- Posts: 950
- Joined: Sun Jun 15, 2025 3:53 pm
Re: AI - is it really intelligent?
I read it all. What prompts did you use, if you used any, for that article to be generated? You can tell me in private if there is anything like that; it was fantastic, though. I'm not sure what I could personally do with the information, but I feel as though I know a little more of the lingo involved in such things. Some of it sounded a bit like what I do too, using what comes up for relatively obscure terms and connections. I'm hoping there may remain some use in still being relatively dependent on search engine results while approaching things from hopefully unanticipated, unpredictable, and unlikely directions, to bypass some of the deliberate funneling going on through these various actors and what they want people to be dealing with and focused on. Even though, for the initial stages of library and index manipulation, real populations are currently irrelevant, they are still the reason these operators make and manipulate these things: to make it so that real people are extremely limited in what they will tend to learn about, and to better control what people are calling rabbit holes and pipelines. They want traffic mostly catching and directing people to a few things, and they don't care about all the extra stuff that doesn't come into play because most people can't think to even begin to look any of it up. So long as they cover the bigger openings and the initial decor of the portals, they don't expect further "penetration" into topics, which has also become extremely difficult in the last few years anyway, because everything superficial is repeated so much that one can't easily go deeper into topics or very far from the main and dominant pages.
This has been a way to severely limit and control people, to regain the kind of control over the population that national television, radio, and newspapers once had, and to make people less interested in knowledge, the way public libraries are neglected as cumbersome, intimidating, and overwhelming, even disparaged; but learning had briefly, very briefly, become available again.
What I believe threatens armed gangs trying to control the lives of everyone else by any means necessary is independence in any way, in thinking, in resources.
Added in 3 days 19 hours 6 minutes 55 seconds:
I saw an ad for some program that creates A.I. employees to cover many roles and tasks in a business.
"
@NyxandMomLive
5 days ago
Good cut the dead weight. I worked for the government and I did my job and the work of the 2 girls sitting behind me. My boss called me into the office after 3 weeks told me and I’m quoting “people here are afraid you are after their jobs because you do to much. You work for the government now you need to slow down. If my boss finds out 1 person can run the entire front end they’ll cut my funding and since you’re low man on the totem pole you’ll get laid off. I will be stuck with the 2 that don’t have your work ethic.” Straight up truth to my face I’ve never ever been told I worked to hard. I shortly transferred to a university thinking it would be better it was worse. If you’re looking for an easy job that requires you pretend to work I urge you to apply for a job in the government. True story!!!
"
Added in 4 days 18 hours 32 minutes 49 seconds:
https://www.bbc.com/news/articles/cm2w3d2jjw0o
- kFoyauextlH
- Posts: 950
- Joined: Sun Jun 15, 2025 3:53 pm
Re: AI - is it really intelligent?
https://www.forbes.com/sites/larsdaniel ... t-have-to/
https://thefinanser.com/2025/06/is-ai-making-us-stupid
https://tech.co/news/another-study-ai-making-us-dumb
https://www.newscientist.com/article/25 ... -about-it/
https://www.theguardian.com/technology/ ... technology
"
crawling-alreadygirl
•
6mo ago
Don't quote me on this, but I've read that this is the reason for rapid eye movement in sleep: if our eyes were completely shut off 1/3 of the time, other processes would start to colonize the parts of the brain used for visual processing
44
"
Lol, I quoted them on it.
"
beepuboopu_aishiteru
•
6mo ago
I use Chat GPT to write professional dick-sucking emails to the client. I never liked doing it, and now I don't have to do it again. It saves me a ton of time and stress making sure I've got the wording just right so my micro-managing boss doesn't send me an email about how to better dick-suck in an email. I don't care if my brain forgets how to write this shit. Good riddance.
37
"
Jobs like these shouldn't exist, and these shouldn't be the workers filling them.
"
GenericFatGuy
•
6mo ago
I do the same with cover letters. They're a bullshit and outdated contrivance. Bullshit letters for bullshit postings for bullshit jobs.
Otherwise though, I'd rather use my brain.
14
"
"
payasosagrado
•
6mo ago
I feel this - if anything we should be asking ourselves why our communications have been so “empty” and devoid of meaning - like the emails we have to send our bosses, interface style communication with coworkers, etc. we forget as humans we built this mundane world and mediocre task to hold up some toxic strange form of civilization in the first place.
1
beepuboopu_aishiteru
•
6mo ago
Agreed. Corporations have favored standardizing messaging and removing personality in fear of what that expression may inspire, no matter how small.
2
"
We? I didn't do any of this.
