Elon Musk, Steve Wozniak, and a group of AI experts and industry executives have called for a six-month pause on the development of AI systems more powerful than GPT-4, the model recently launched by OpenAI. In an open letter, they laid out the risks and uncertainties such systems could pose to society. The proposed pause, they argued, should be used to ensure that AI development becomes safer, more accurate, more interpretable, better aligned, loyal, and trustworthy, and that AI governance systems are built in parallel through work with policymakers.
In early March, Microsoft-backed OpenAI announced the fourth iteration of its Generative Pre-trained Transformer (GPT) AI program, which impressed users by engaging in human-like conversation, composing songs, and summarizing lengthy documents.
The letter, issued by the Future of Life Institute (FLI), stated that powerful artificial intelligence (AI) systems should be developed only once there is confidence that their effects on society will be positive and their risks manageable.
It urged AI laboratories and independent experts to use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent experts.
Critics accused the Future of Life Institute, which is primarily funded by the Musk Foundation, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into these systems.
Musk has expressed frustration with regulators critical of his company's Autopilot system, even as he has called for a regulatory body to ensure that AI development serves the public interest.
A recent OpenAI post stated that it may be important at some point to obtain independent review before training future systems, and for the most advanced efforts to agree to limit the rate of growth of the compute used to create new models.
AI researcher Timnit Gebru recently said that, in the current scenario, we should be genuinely concerned about the direction AI is taking.
Gebru added, "We should always understand the damage and harm that could possibly occur before we spread something across society, and we should try to reduce those problems before we put the technology out there."
Some worry that tools like ChatGPT will flood social media with articles that read as professionally written, or with letters that sound convincingly authentic.