AI-generated work from managers is damaging trust among employees
App Developer Magazine

Artificial Intelligence



Wednesday, March 25, 2026

Russ Scritchfield

More than half of US employees report receiving low-quality AI-generated work from managers. Most say it damages trust in leadership and increases skepticism about AI, while exposing major gaps in workplace training and standards.

Across industries, employees are noticing a shift in how work is created and presented by their managers. Generative AI tools are now part of the managerial toolkit, shaping emails, presentations, performance notes, and strategic updates. While AI can speed up drafting and analysis, a growing number of team members feel unsettled when work that influences careers, pay, and priorities appears to be created by software without clear disclosure. The outcome is a trust gap that should concern any organization that values accountability and healthy culture.

What The Findings Mean For Employers

Trust is the foundation of effective leadership. Employees expect their managers to bring judgment, context, and accountability to the work that guides teams. When AI-shaped content arrives without clarity on what is human created and what is machine generated, that expectation weakens. People start to ask whether the guidance truly reflects a manager's understanding. They also question whether their own contributions will be fairly assessed if AI is shaping the criteria, the review, or the story shared with senior leadership.

Why Employees Feel Misled

Three drivers are most common. First is lack of disclosure. If team members learn after the fact that AI wrote a performance summary or a project plan, they can feel deceived even if the material is accurate. Second is quality drift. AI text can sound confident while containing gaps in logic or awareness of local context. Employees who must correct these gaps lose time and patience. Third is fairness. People fear that AI-generated materials will substitute for direct observation, leading to decisions that are detached from real work.

The Line Between Assistance And Authorship

AI can be a helpful assistant. It can outline options, summarize long threads, and propose first drafts. But employees draw a line at authorship of materials that carry managerial voice and authority. When a manager uses AI for ideation or structure, then adds judgment, examples, and decisions, the result still feels like leadership. When a manager outsources the narrative and adds only a quick review, it feels like abdication. Stating where the line is for your organization is essential, and then living by it is even more important.

Practical Steps To Rebuild Confidence

Start with plain-language disclosure. Give managers a simple formula for noting AI involvement in emails, briefs, and plans. Encourage managers to say what the tool assisted with and to affirm that they have reviewed and take responsibility for the content. Build a flow that requires human verification of facts, names, data, and commitments. Give teams a shared rubric for when AI can be used and when it should not be used, including anything that evaluates performance, pay, or sensitive employee information.
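As a purely illustrative sketch, a disclosure formula can be reduced to a fill-in-the-blanks template that a team could standardize on. The function and field names below are assumptions for the example, not from the article:

```python
def disclosure_note(tool: str, assisted_with: str, reviewer: str) -> str:
    """Build a plain-language AI-disclosure footer for an email or brief.

    Follows the pattern suggested above: state what the tool assisted
    with, then affirm human review and responsibility.
    """
    return (
        f"AI assistance: {tool} helped with {assisted_with}. "
        f"{reviewer} has reviewed this content and takes responsibility for it."
    )

# Example usage (names are illustrative):
print(disclosure_note("a drafting assistant", "the first outline", "The sender"))
```

Keeping the wording fixed is the point: a consistent, recognizable note is what turns disclosure into a habit rather than a one-off gesture.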

Policy And Governance That Employees Can Believe In

Publish a short policy that employees can read in a few minutes. Explain approved tools, permitted use cases, and red lines such as private data, health information, or legal claims. Describe how prompts and outputs are stored and who can access them. State how the company will audit usage and address issues. Create a channel for employees to ask questions and report concerns without fear of retaliation. Pair the policy with manager training that focuses on real scenarios from your workplace, not abstract examples.
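A short, readable policy also lends itself to a machine-checkable rubric. The sketch below is a minimal illustration of that idea; the tool names, use-case labels, and red-line categories are assumptions drawn loosely from the examples above, not an actual policy:

```python
# Illustrative policy rubric: approved tools, permitted use cases,
# and red lines that always block AI use. All entries are examples.
POLICY = {
    "approved_tools": {"drafting-assistant", "meeting-summarizer"},
    "permitted_use_cases": {"ideation", "summarization", "first_draft"},
    "red_lines": {
        "performance_review", "pay_decision", "private_employee_data",
        "health_information", "legal_claims",
    },
}

def is_permitted(tool: str, use_case: str) -> bool:
    """True only when the tool is approved, the use case is explicitly
    permitted, and no red line is crossed."""
    return (
        tool in POLICY["approved_tools"]
        and use_case in POLICY["permitted_use_cases"]
        and use_case not in POLICY["red_lines"]
    )
```

Encoding the rubric this way keeps the red lines explicit and auditable, which is exactly what an employee-facing policy needs to be believable.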


Measuring Trust Without Guesswork

Track trust the way you track revenue or uptime. Use periodic pulse surveys with simple, repeatable questions about clarity, fairness, and confidence in managerial communication. Monitor the rework rate of AI-assisted materials by counting corrections, clarifications, or withdrawals. Watch for patterns in grievances, attrition among high performers, and internal referrals. Share the trends with employees and explain the actions you will take. Transparency about measurement builds credibility, especially when you follow through and show progress.
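The rework-rate metric described above can be computed directly once corrections and withdrawals are logged per item. This is a hedged sketch; the record shape (`corrections`, `withdrawn`) is an assumption for the example:

```python
def rework_rate(materials: list[dict]) -> float:
    """Share of AI-assisted materials that needed rework after delivery:
    corrections, clarifications, or withdrawal."""
    if not materials:
        return 0.0
    reworked = sum(
        1 for m in materials
        if m.get("corrections", 0) > 0 or m.get("withdrawn", False)
    )
    return reworked / len(materials)

# Illustrative sample: one of four items needed rework.
sample = [
    {"corrections": 0},
    {"corrections": 2},
    {"corrections": 0, "withdrawn": False},
    {"corrections": 0},
]
print(rework_rate(sample))  # 0.25
```

Trending this number over time, alongside pulse-survey scores, is what turns "trust" from a feeling into something the organization can actually report on.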

Making Disclosure A Habit

Trust grows when disclosure is routine, not rare. Offer ready-to-use language that managers can add to messages and documents. Provide templates in collaboration tools that include a short note about AI assistance and human review. Encourage leaders to narrate their process in meetings, such as explaining that a draft began with an AI outline and was then expanded with examples from the team's own work. Over time, this simple practice normalizes responsible use and reduces speculation.

Supporting Managers To Use AI Well

Managers need guidance that goes beyond tool tips. Teach them how to craft prompts that reflect company voice and values, how to evaluate AI outputs for bias or missing context, and how to fact-check efficiently. Show how to adapt AI suggestions to the needs of different audiences, from executives to front-line teams. Coach managers to prioritize listening, walk the floor, and integrate insights from direct observation. AI can improve speed, but it cannot replace the relationship work that trust requires.

Risks To Brand And Legal Readiness

Poorly governed AI use can expose a company to reputational or legal risk. Confident but inaccurate updates can mislead stakeholders. Unlabeled AI content can prompt public backlash if discovered. Use of sensitive data in prompts can violate privacy commitments. To reduce risk, require source citations in background research, maintain a record of significant AI-assisted decisions, and run periodic reviews with legal, privacy, and security partners. When in doubt, choose slower accuracy over faster output.

What Employees Want To Hear

Employees are not rejecting technology. They are asking for clarity, accountability, and respect. They want to know when AI is used, what human oversight looks like, and how their work is evaluated fairly. They want AI to remove low value tasks so managers have more time for coaching and recognition. They also want a voice in how policies evolve. Invite them to contribute to guidelines, help test new workflows, and identify moments where human judgment is essential.

A Simple Path Forward

Set a clear standard for disclosure. Train managers on practical use and common pitfalls. Measure trust and share results. Involve employees in policy updates. Reward leaders who model transparency and sound judgment. These steps do not require large budgets or complex systems. They require consistent behavior and a willingness to explain how work is made. When leaders do that, AI becomes another tool in service of people, not a wedge that separates them.





