
Daryl Plummer

One billion people to be 'affected by digital addiction' by 2028

DUBAI, October 25, 2024

Gartner predicts that by 2028, about one billion people will be affected by digital addiction, which will lead to decreased productivity, increased stress and a spike in mental health disorders such as anxiety and depression.
 
Also, through 2026, 20% of organisations will use AI to flatten their organisational structure, eliminating more than half of current middle management positions, said Gartner in its top strategic predictions for 2025 and beyond. 
 
Organisations that deploy AI to eliminate middle management roles will be able to capitalise on reduced labour costs in the short term and on long-term savings, Gartner said. 
 
Enhanced productivity
AI deployment will also allow for enhanced productivity and an increased span of control by automating and scheduling tasks, reporting and performance monitoring for the remaining workforce, which allows the remaining managers to focus on more strategic, scalable and value-added activities. 
 
At the same time, AI implementation will present challenges for organisations, such as the wider workforce feeling concerned over job security, managers feeling overwhelmed by additional direct reports and remaining employees being reluctant to change or adopt AI-driven interaction. Additionally, mentoring and learning pathways may break down, and more junior workers could suffer from a lack of development opportunities. 
 
By 2028, technological immersion will impact populations with digital addiction and social isolation, prompting 70% of organisations to implement anti-digital policies.
 
Digital immersion will also negatively impact social skills, especially among younger generations, which are more susceptible to these trends. 
 
Impact of AI
“It is clear that no matter where we go, we cannot avoid the impact of AI,” said Daryl Plummer, Distinguished VP Analyst, Chief of Research and Gartner Fellow. “AI is evolving as human use of AI evolves. Before we reach the point where humans can no longer keep up, we must embrace how much better AI can make us.” 
 
“The isolating effects of digital immersion will lead to a disjointed workforce, causing enterprises to see a significant drop in productivity from their employees and associates,” said Plummer. “Organisations must make digital detox periods mandatory for their employees, banning after-hours communication and bringing back compulsory analogue tools and techniques such as screen-free meetings, email-free Fridays and off-desk lunch breaks.”
 
By 2029, 10% of global boards will use AI guidance to challenge executive decisions that are material to their business. 
 
AI-generated insights will have far-reaching impacts on executive decision making and will empower board members to challenge executive decisions. This will end the era of maverick CEOs whose decisions cannot be fully defended. 
 
“Impactful AI insights will at first seem like a minority report that doesn’t reflect the majority view of board members,” said Plummer. “However, as AI insights prove effective, they will gain acceptance among executives competing for decision support data to improve business results.” 
 
By 2028, 40% of large enterprises will deploy AI to manipulate and measure employee mood and behaviours, all in the name of profit.
 
Sentiment analysis
AI has the capability to perform sentiment analysis on workplace interactions and communications. This feedback helps ensure that overall sentiment aligns with desired behaviours, supporting a motivated and engaged workforce.
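As a rough illustration of the underlying technique, the sketch below scores a handful of workplace messages with an off-the-shelf sentiment model and aggregates the results. The messages, the model choice (the default Hugging Face transformers sentiment-analysis pipeline) and the threshold are assumptions made for illustration, not part of Gartner's prediction.

# Minimal sketch: scoring workplace messages for overall sentiment.
# Assumes the Hugging Face `transformers` package and its default
# sentiment-analysis model; messages and threshold are illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

messages = [
    "Great work on the release, the team pulled together really well.",
    "I'm worried we won't hit the deadline with the current staffing.",
    "Another late night. This workload is not sustainable.",
]

results = classifier(messages)

# Aggregate: count POSITIVE scores as +score and NEGATIVE as -score,
# then compare the mean against a simple threshold.
signed = [r["score"] if r["label"] == "POSITIVE" else -r["score"] for r in results]
mean_sentiment = sum(signed) / len(signed)

if mean_sentiment < -0.2:  # illustrative threshold
    print(f"Overall sentiment {mean_sentiment:+.2f}: flag for follow-up")
else:
    print(f"Overall sentiment {mean_sentiment:+.2f}: within expected range")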
 
“Employees may feel their autonomy and privacy are compromised, leading to dissatisfaction and eroded trust,” said Plummer. “While the potential benefits of AI-driven behavioral technologies are substantial, companies must balance efficiency gains with genuine care for employee well-being to avoid long-term damage to morale and loyalty.” 
 
By 2027, 70% of new contracts for employees will include licensing and fair usage clauses for AI representations of their personas.
 
Large language models
Large language models (LLMs) have no set end date, which means that employees' personal data captured by enterprise LLMs will remain part of the model not only during their employment, but also after it. 
 
This will lead to a public debate over whether the employee or the employer owns such digital personas, which may ultimately lead to lawsuits. Fair usage clauses will be used to protect enterprises from immediate lawsuits but will prove controversial. 
 
By 2027, 70% of healthcare providers will include emotional-AI-related terms and conditions in technology contracts or risk billions in financial harm.
 
Increased workloads for healthcare workers, driven by staff departures, rising patient demand and clinician burnout, are creating an empathy crisis. Using emotional AI for tasks such as collecting patient data can free up healthcare workers’ time and alleviate some of the burnout and frustration that comes with the heavier workload. 
 
By 2028, 30% of S&P companies will use GenAI labeling, such as “xxGPT,” to reshape their branding while chasing new revenue.
 
New revenue streams
CMOs view GenAI as a tool that can launch both new products and business models. GenAI also allows for new revenue streams by bringing products to market faster while delivering better customer experiences and automating processes. As the GenAI landscape becomes more competitive, companies are differentiating themselves by developing specialised models tailored to their industry.
 
By 2028, 25% of enterprise breaches will be traced back to AI agent abuse, from both external and malicious internal actors.
 
New security and risk solutions will be necessary as AI agents significantly increase the already invisible attack surface at enterprises. This will force enterprises to protect their businesses from savvy external actors and from disgruntled employees who create AI agents to carry out nefarious activities. 
 
“Enterprises cannot wait to implement mitigating controls for AI agent threats,” said Plummer. “It’s much easier to build risk and security mitigation into products and software than it is to add them after a breach.” 
 
By 2028, 40% of CIOs will demand “Guardian Agents” be available to autonomously track, oversee, or contain the results of AI agent actions.
 
Enterprises’ interest in AI agents is growing, and as a new level of intelligence is added, GenAI agents are poised to expand rapidly in the strategic planning of product leaders. “Guardian Agents” build on the notions of security monitoring, observability, compliance assurance, ethics, data filtering, log reviews and a host of other mechanisms for AI agents. Through 2025, the number of product releases featuring multiple agents will rise steadily, with more complex use cases.
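To make the guardian idea concrete, here is one possible oversight pattern sketched in Python: every action an AI agent proposes is checked against a policy and logged before it runs. The agent, actions, policy rule and logging setup are illustrative assumptions, not a description of any specific product.

# Minimal sketch of a "guardian agent" wrapper: each proposed action is
# checked against a policy and logged before it is executed.
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardian")

@dataclass
class Action:
    name: str
    target: str

def policy_allows(action: Action) -> bool:
    # Illustrative compliance rule: block any action touching payroll data.
    return "payroll" not in action.target

def guarded(execute: Callable[[Action], None]) -> Callable[[Action], None]:
    def wrapper(action: Action) -> None:
        if policy_allows(action):
            log.info("ALLOW %s on %s", action.name, action.target)
            execute(action)
        else:
            log.warning("BLOCK %s on %s", action.name, action.target)
    return wrapper

@guarded
def run_action(action: Action) -> None:
    print(f"executing {action.name} on {action.target}")

run_action(Action("export", "sales_report.csv"))
run_action(Action("export", "payroll_2024.csv"))

The point of wrapping execution rather than merely observing it is that the oversight step cannot be bypassed: the agent’s action only runs if the guardian allows it, and every decision leaves an audit trail.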
 
New threat surface
“In the near-term, security-related attacks of AI agents will be a new threat surface,” said Plummer. “The implementation of guardrails, security filters, human oversight, or even security observability are not sufficient to ensure consistently appropriate agent use.” 
 
Through 2027, Fortune 500 companies will shift $500 billion from energy opex to microgrids to mitigate chronic energy risks and AI demand. 
 
Microgrids are power networks that connect generation, storage and loads in an independent energy system that can operate on its own or with the main grid to meet the energy needs of a specific area or facility. 
 
This will create competitive advantage for day-to-day operations and derisk energy supply in the future. Fortune 500 companies that spend a share of their operating expenses on energy should consider investing in microgrids, which will offer a better return than continuing to pay rising utility bills.--TradeArabia News Service
 


