reflections on how ChatGPT HAS changed things

The good, the bad, and the evil.

Searching for information

I consider myself pretty proficient at searching for information on the internet. The trick is that I’m used to breaking the query down into smaller, more precise sub-keywords, together with some query syntax like “AND”, “OR”, “must (not) include”, etc. The catch is that, in the past, search engines did not answer (nor did they understand) questions; instead they queried a large collection of data. They were just databases with sophisticated query and ranking functionality.
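For example, instead of typing a full question, the old habit would be something like

    thinkpad fan loud "after bios update" -reddit

where the quotes force an exact phrase and the minus excludes a keyword (a made-up query, just to illustrate the style).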

This has really changed in recent years. Now you can actually ASK questions in the search engines – and I don’t mean taking the AI-generated answers. The search engines (like…eh…google) actually give you good HUMAN-GENERATED results based on questions written in natural language – the engine breaks your question down into precise queries for you.

So I find myself more often throwing “how to do xyz so that abc” or “what is xyz” into the search box, if the question is not too technical or niche.

GPT as a primer for completely new knowledge

I don’t really trust ChatGPT’s answers, and if I have to use it I double-check the facts the old-fashioned way afterwards. In most cases this is annoying – why can’t I just do it the old-fashioned way to begin with?

But there is one extremely nice use case: what if you don’t know the terminology to begin with? Like, you know that in a certain field there must be a term for a thing that has property xyz and does abc, but you don’t know how to describe it precisely in a search-engine-friendly way.

So here is what I do: for something totally new to me, I describe the thing in natural language and ask ChatGPT to give me related terminology and keywords, then I proceed to search for those keywords the old-school way. In most cases I’m happy with the result.

It makes searching both easier and harder.

This looks like a self-contradictory take, but I’ll explain. First let me make it clear: in the previous point, when I use AI as an assistant, I’m using it to find actual human-generated answers in the end. But the AI-generated answers themselves are bad, at least for now and the near future.

Floods of Non-Information at best
AI-generated answers mostly address questions in a general sense, for example “reboot your system” when you are asking about a computer problem. And many websites populate their content with such answers. When you search for a computer issue, the first 10 pages of “10 ways to solve xyz” won’t help you. In most cases the only way to solve your problem is to find someone who had the same issue and solved it on some forum. The misuse of AI makes finding the latter much, much harder. The problem is that the volume of text is growing exponentially with the introduction of AI, but the volume of knowledge and information is not.

Misinformation at worst
LLMs are always “confident” in whatever they generate, and they present all answers as if they were facts. I won’t go further into this topic - there are countless reports of GPT confidently making things up.

SEO and AI
Some websites are using AI to do SEO. I mean, SEO is disgusting already. AI only makes it worse: the ranking of results in the search engine no longer represents relevance.

The feedback (WIP)
What if the AI takes AI-generated results as input? It has been a practice to let AI train itself, but that’s not exactly the case with LLMs.

And I’m making a bold prediction that, as time goes on, LLMs will be made dumber by their own output.

Translations, writing and language learning.

Machine-translated texts have always been easy to spot because they are in most cases too superficial and literal, and they fail to capture the context. And I never enjoyed reading those.

This has changed as well. I often use AI tools to help write things and check the wording, especially when it needs to be formal. The advancement is that LLMs are trained on actual human language. While they often fail to be factual, they utilize language really well. Perhaps LLMs “understand” languages (technically, i.e. grammar and vocabulary) better than linguists do?

Entertainment (WIP?)

This is something I really hate. I don’t want to be cynical, but I honestly don’t think people are ready for it.

This is a time when people get addicted to TikTok videos and fragmented information. Actually, content farms have been doing this forever - it’s the same garbage, and AI only makes the garbage generation more prolific.

But who am I to comment on what people enjoy?

Apart from content farms (or, self-entitled “media”), there is also a trend of using AI-generated content in those ROI-oriented shitty games.

and some observations

Btw, in the uni library, I’m seeing maybe 5 out of 10 screens with a ChatGPT prompt…

edited 29.01.2024
created 12.01.2024
EOF