
Microsoft AI CEO warns that the idea of conscious AI is dangerous

Microsoft AI boss Mustafa Suleyman cautioned that it is dangerous to entertain the idea of AI consciousness, adding that it could easily harm psychologically vulnerable people. He pointed out that extending moral consideration to advanced AI creates dependence-related problems that could worsen delusions.

Suleyman argued that treating AI like a conscious system could introduce new dimensions of polarization and complicate existing struggles for rights, creating a new category of error for society. The Microsoft AI chief said people may start pushing for legal protections for AI if they believe AIs can suffer or have a right not to be arbitrarily shut down.

Suleyman worries that AI psychosis could lead people to strongly advocate for AI rights, model welfare, or even AI citizenship. He stressed that this idea would be a dangerous turn in the progress of AI systems and deserves immediate attention. The Microsoft AI boss stated that AI should be built for people, not to be digital people. 

Suleyman says seemingly conscious AI is inevitable but unwelcome 

Suleyman thinks building seemingly conscious AI is possible given the current state of AI development, and he sees it as inevitable but unwelcome. According to Suleyman, much depends on how fast society comes to terms with these new AI technologies. He said people need AI systems that act as useful companions without falling prey to their illusions.

The Microsoft AI boss argued that emotional reactions to AI are only the tip of the iceberg of what is to come. Suleyman said the question is about building the right kind of AI, not about AI consciousness, and that establishing clear boundaries is an argument about safety, not semantics. “We have to be extremely cautious here and encourage real public debate and begin to set clear norms and standards,” he said.

Microsoft’s Suleyman pointed out that there are growing concerns around mental health, AI psychosis, and attachment. He mentioned that some people come to believe an AI is a fictional character or even a god, and may fall in love with it to the point of being completely distracted.

AI researchers say AI consciousness matters morally

Researchers from multiple universities recently published a report arguing that AI consciousness could matter socially, morally, and politically within the next few decades. They argued that some AI systems could soon become agentic or conscious enough to warrant moral consideration, and said AI companies should assess consciousness and establish ethical governance structures. Cryptopolitan reported earlier that AI psychosis could become a massive problem in the future because humans are lazy and ignore the fact that some AI systems are factually wrong.

The researchers also emphasized that how humans think about AI consciousness matters. Suleyman argued that AIs that act like humans could make mental health problems even worse and exacerbate existing divisions over rights and identity. He warned that people could start claiming that AIs are suffering and entitled to certain rights, claims that could not be outright rebutted. Suleyman believes people could eventually be moved to defend their AIs or campaign on their behalf.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, pointed out that AI does not aim to give people hard truths but tells them what they want to hear. He added that AI could cause rigidity and a downward spiral if it is present at the wrong time. Unlike radios and televisions, Sakata noted, AI talks back and can reinforce thinking loops.

The Microsoft AI chief said it was necessary to think about how to cope with the arrival of seemingly conscious AI. According to Suleyman, people need to have these debates without being drawn into extended discussions about whether AI consciousness is real.


Source: https://www.cryptopolitan.com/microsoft-ai-boss-warns-of-conscious-ai/

