AI at its worst
The company behind ChatGPT and other generative AI products
OpenAI, the San Francisco-based software company, states that its GPT-4o product can reason across audio, vision, and text in real time. We see it as a technological takeover of human interactions, on both a personal and a social level. Even the company's own AI system acknowledges the dangers it poses to human interactions, while the number of ChatGPT users is steadily increasing.
We witness that efforts to create policies against the use of ChatGPT in education and in the workplace are largely failing. A new generation is growing up with a diminished ability to read, write, and develop refined social skills. The use of ChatGPT enables the replacement of a range of clerical, administrative, artistic, and even management jobs, and can provide a quantitative advantage for certain businesses, benefiting both the platform's power users and OpenAI.
The company's other products, such as the image generator DALL·E 3 and the text-to-video generator Sora, promote the ease of generating results seemingly similar to those of professional painters, graphic artists, and video creators. We see this as a devaluation of art and a disrespect of artists. The datasets such systems are trained on are questionable in terms of their curation, equitability, and IP ownership, and they lack clear processes for creators to opt out of the use of their artwork, let alone to claim royalties.
We conclude that the social, cultural, and economic damages far outweigh the limited, subjective benefits of OpenAI's platforms. We suggest that, despite the company's conspicuously touted safety programs, its ability to control the autonomous development and misuse of its own tools is lacking – a point ironically confirmed by statements from its own ChatGPT.
to be featured soon
More actively pro-AI / anti-humanistic entities will be highlighted soon.
Powering the takeover
In their own words, NVIDIA's "work in AI and digital twins is transforming the world's largest industries and profoundly impacting society." What this syrupy statement doesn't elaborate on is the profound, long-term negative impact on society.
The hardware and software company, headquartered in Santa Clara, CA, is as close as any business can get to a real-life Skynet today. While this might sound like a cliché gloom-and-doom proposition, the NVIDIA website describing CEO Jensen Huang's conversation topics ("the role of generative AI in building virtual worlds, and virtual worlds for building the next wave of AI and robots") speaks volumes about the intent and ambition behind NVIDIA's activities.
The company proudly touts its AI supercomputer hardware technology reaching 100 million new users in two months, and positions AI Factory-building know-how as one of its primary offerings, envisioning a future in which every company runs an AI factory. As they state: "NVIDIA is the engine of the world's AI infrastructure." Their programs also include "AI-accelerated GPU designs for generative AI applications". However, they show no apparent concern for the entire industry segments, and the workers' jobs within them, that these activities wipe out, nor for the fair (?) practices used to acquire their training data, let alone for the effective guardrails (?) they'd set up to minimize the misuse of their computing systems.
Hiding behind comparatively low-priority climate and pharmaceutical research initiatives, nature-influenced office designs, and the CEO's (supposedly) relatable personal style of clothing and direct communication, the company in reality embodies the concept of generative AI: "Good for Business, Bad for the Individual".