
Harnessing AI Responsibly: Our Pro-technology, Pro-worker Vision

Learn why 19th-century Luddites can be some of the best teachers on how to get AI right.

Published By Dr. Richard Sonnenblick

In the early 19th century, a group of English textile artisans — known as the Luddites — embarked on a series of vigorous protests. Often misrepresented as anti-technology, their cause was instead deeply rooted in the human need for fair wages and sustainable work conditions.  

History recounts that these artisans spared manufacturers who embraced the latest technologies but ensured their workforce earned a living wage and was treated respectfully. Standing on the precipice of today’s artificial intelligence revolution, we find an uncanny parallel to the Luddites’ chapter in history.

As Chief Data Scientist at Planview, I believe we can be pro-technology while remaining staunchly pro-human, implementing AI on behalf of our customers as a force for good.  

We can reap the benefits of AI through its responsible development and use by focusing on the following three core principles: ensuring accuracy and unbiased technologies, respecting privacy and intellectual property through judicious data usage, and leveraging AI not to replace — but to augment — skilled labor. 

Ensuring Accuracy and Unbiased Technologies 

Bias and inaccuracy in AI models can result in detrimental outcomes, ranging from misguided business decisions to unfair treatment. It is our collective responsibility as technologists to mitigate such risks.  

AI technologies must be accurate and impartial to serve their intended purpose and to maintain public trust. 

At Planview, we ensure accuracy and impartiality with bias identification and bias correction. Identification is carried out through comprehensive testing, much like quality assurance in traditional software development. We test AI models using diversified datasets that reflect the real world’s complexity, making sure they work accurately and fairly across different scenarios and demographics. 
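To give a flavor of what slice-based testing can look like in practice, here is a minimal sketch that compares a model’s accuracy across demographic or scenario groups; the dataset, column names, metric, and threshold are hypothetical illustrations, not a description of Planview’s actual evaluation pipeline.

```python
# Minimal sketch of slice-based evaluation (illustrative only;
# the file, column names, and 5-point threshold are hypothetical).
import pandas as pd
from sklearn.metrics import accuracy_score


def evaluate_by_group(df: pd.DataFrame, group_col: str, label_col: str, pred_col: str) -> pd.Series:
    """Compute accuracy separately for each demographic or scenario slice."""
    return df.groupby(group_col).apply(
        lambda slice_: accuracy_score(slice_[label_col], slice_[pred_col])
    )


# A held-out evaluation set that already contains the model's predictions.
eval_df = pd.read_csv("holdout_with_predictions.csv")  # hypothetical file
per_group_accuracy = evaluate_by_group(eval_df, group_col="region", label_col="label", pred_col="prediction")

# Flag any slice whose accuracy falls well below the overall figure.
overall = accuracy_score(eval_df["label"], eval_df["prediction"])
suspect_groups = per_group_accuracy[per_group_accuracy < overall - 0.05]
print(per_group_accuracy, suspect_groups, sep="\n")
```

A gap between a slice’s accuracy and the overall number is exactly the kind of signal that sends a model back for more data collection or retraining before release.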

Similarly, bias correction is not a one-time effort. It’s an iterative process that begins with transparency about the AI system’s design, purpose, and limitations. Model interpretability plays a crucial role in this process. Through explainable AI, we can trace how a model arrived at a specific conclusion, enabling us to identify and rectify bias in the decision-making process. 
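One widely used interpretability technique that supports this kind of audit is permutation importance: shuffle each input feature in turn and measure how much the model’s held-out score degrades. The sketch below uses synthetic data and a generic classifier purely as placeholders; it is not a description of Planview’s models.

```python
# Interpretability sketch using permutation importance (illustrative;
# the model, features, and synthetic data are placeholders).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")
```

If a feature that should be irrelevant to the decision turns out to dominate the ranking, that is a cue to re-examine the training data and correct the model.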

Respecting Privacy and Intellectual Property through Judicious Data Usage 

Data is the fuel that powers AI. However, the use of data must respect the rights of individuals and organizations. Privacy and intellectual property rights are non-negotiable for us. 

We work diligently to ensure that our training data is compiled responsibly. In practice, this means anonymizing personal data, obtaining necessary consents, and strictly adhering to international regulations like GDPR.  
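As a rough illustration of what responsible data preparation can involve, the sketch below drops direct identifiers and replaces a record key with a salted one-way hash before data enters a training pipeline. The field names are hypothetical, and hashing identifiers is pseudonymization rather than full anonymization under GDPR; it is one layer among several, not a complete compliance recipe.

```python
# Sketch of basic PII handling before training (illustrative; column names are
# hypothetical, and hashed IDs are pseudonymous, not fully anonymous).
import hashlib

import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "phone"]  # dropped outright


def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted one-way hash so records stay linkable but not readable."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


def prepare_training_frame(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Remove direct identifiers and pseudonymize the record key."""
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    if "user_id" in out.columns:
        out["user_id"] = out["user_id"].astype(str).map(lambda v: pseudonymize(v, salt))
    return out
```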

In many cases, we run machine-learning algorithms within our own computing resources to shield sensitive data. In the context of generative AI, where models managed by third parties such as Microsoft and OpenAI may provide best-in-class performance, we precede any use with a careful audit of those parties’ privacy and security practices.

By adhering to these principles, we ensure that our AI technologies are built on a solid foundation of respect and trust. 

Augmenting Skilled Labor, Not Replacing It 

AI’s potential is awe-inspiring. Yet, as technologists, it is incumbent upon us to ensure it serves humanity.  

Our technologies are designed to handle repetitive and mundane tasks, improve productivity, and enable our teams and customers to focus on the more creative, strategic, and inherently “human” endeavors that underpin all successful, sustainable businesses. 

In strategic portfolio management, for instance, AI can streamline processes (such as plan creation and timesheet completion), predict project timelines, and optimize resources. It can analyze vast amounts of data swiftly, providing insights that help managers make more informed decisions.  
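To make the timeline-prediction idea concrete, here is a deliberately simple sketch of the kind of model that could estimate project duration from historical portfolio data; the feature names and data source are hypothetical and do not describe Planview’s production models.

```python
# Toy sketch of predicting project duration from historical features
# (illustrative only; the feature names and CSV file are hypothetical).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

history = pd.read_csv("completed_projects.csv")  # hypothetical historical portfolio data
features = history[["team_size", "task_count", "dependency_count", "planned_days"]]
target = history["actual_days"]

model = GradientBoostingRegressor(random_state=0)

# Cross-validated mean absolute error gives a rough sense of forecast quality.
mae = -cross_val_score(model, features, target, scoring="neg_mean_absolute_error", cv=5).mean()
print(f"Estimated timeline error: {mae:.1f} days")

model.fit(features, target)
```

A forecast like this is an input to a manager’s judgment, not a substitute for it, which is precisely the distinction the next paragraph draws.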

However, AI cannot replace the leadership, creativity, and human touch that product owners and project managers bring to their teams and projects. We strive to ensure our AI tools empower our users, making their jobs easier and more productive, not rendering them obsolete. 

Pursuing a Responsible, Ethical, Inclusive Technology Landscape 

Much like the Luddites of the 19th century, we are pursuing a responsible, ethical, and inclusive technology landscape that uplifts everyone it touches.

By ensuring our AI technologies are accurate and unbiased, respecting privacy and intellectual property, and focusing on augmenting rather than replacing human capabilities, we work towards creating a digital ecosystem that echoes the spirit of fair work conditions championed by the Luddites. 

As we aggressively invest in the future of AI at Planview, we are inspired by the lessons of the Luddites.

The Luddites were not technophobes; they were protectors of their craft, their livelihoods, and their dignity. Similarly, as technologists, we must safeguard our ethics, our users, and our society in the era of AI. 

We remain committed to engaging in meaningful discussions about AI’s implications for our work and society. And we look forward to hearing your feedback as we weave responsible AI into the Planview Platform.


Written by Dr. Richard Sonnenblick, Chief Data Scientist

Dr. Sonnenblick, Planview’s Chief Data Scientist, has years of experience working with some of the largest pharmaceutical and life sciences companies in the world. Drawing on that experience, he has formulated insightful prioritization and portfolio review processes, scoring systems, and financial valuation and forecasting methods that enhance both product forecasting and portfolio analysis. Dr. Sonnenblick holds a Ph.D. and MS from Carnegie Mellon University in Engineering and Public Policy and a BA in Physics from the University of California, Santa Cruz.