I recently wrote on Threads about how the "magical" nature of AI is incredibly worrying when you stop to think more deeply about it.


I had some extra thoughts that I think warranted a somewhat longer musing.


It's first prudent, I think, to define exactly how this issue currently manifests. If you recall, in the early days of ChatGPT there used to be a small notice at the bottom of the prompting page that said something along the lines of: ChatGPT makes mistakes, check important info (alongside the version number). If you clicked on the link, it took you to the webpage for updates related to the model's development.

That exact same notice on the webpage currently reads:
ChatGPT can make mistakes. Check important info.


This also does not contain a hyperlink. I mention this because, for the average person who does not follow an extensive amount of tech news - if they did not actively seek out more information on ChatGPT - they would have zero knowledge of OpenAI as a company and what it does.


In fact, there is currently zero reference, on any part of ChatGPT, to OpenAI as a company.

My foremost problem with this is that OpenAI was founded as a research company, so however far down the line of "making a product" they go - their actual roots as a research lab do not change (although, with the news of them moving to a for-profit model, the previous statement holds less water). Irrespective of that, the lack of association between ChatGPT and its creators makes the "magic" of it far more pervasive for a user, detaching it from any actual company. Without any knowledge of the inner workings of the technology, and with no knowledge of who created it, the vast majority of people are left with a deceptive picture of how the product is developed, making them blind to any unsafe development in the future.


I want to underscore that the absence of a link between the company and the product on the webpage is not the norm at all, even if we look at ChatGPT as a product first (which, as explained earlier, should not be the case). If you go to google.com, there are several links that will take you to more information about Google as a company. There is an "About" page, and a literal "How Search Works" page that explains, in shockingly understandable language, some of the foundations of search.


This argument may seem trivial, but the "humanization" of the company creating a product is, in my opinion, really critical to ensuring that safe development actually occurs. If you can connect the product to the real people building it, with actual explanations - you are far less likely to assume the technology is beyond your understanding, and that it somehow has a far greater ability to affect you than you have to wield it for your own use.


That last point in the previous paragraph is particularly dangerous when it comes to AI. If we begin humanity's relationship with AI from the notion that most humans are unable to understand its inner workings, and that we should accept it as something "magical" beyond the capacity of even someone with a PhD - I would be scared for what the future holds. Of course, understanding the deep technicalities of the product does indeed require a lot of know-how. But that is not a sufficient reason to keep its development, and at the very least access to more information, off the front page of the product.


The future of AI with humans should be far more symbiotic than where it looks like we are heading right now - which seems more parasitic than anything else.