If you follow project management news and experts, you've probably heard or read various opinions regarding the potential impact of AI and machine learning.

Even the Association for Project Management's latest report, 'The Adaptive Project Professional', suggests that 'learning and understanding new technology' should be a key focus for project professionals.

But who and what should you believe about how AI could really change the way your business manages change?

Along with The Lazy Project Manager, Peter Taylor, we recently invited project professionals to attend a webinar titled 'AI: the end of Project Management as we know it?' and pose their questions to Sharktower's CEO Craig Mackay.

Unsurprisingly, the attendees didn't hold back with their questions, and we've compiled five of them here, along with the answers and some key quotes from the discussion.

You can watch the full webinar below, or simply click on a question to read the answer.


 

Q1. How can AI help remove human bias?

Q2. Can AI help project managers make the right decisions at the right time?

Q3. How difficult are AI tools to implement and what are the dangers?

Q4. Does project management software rely on considerable data collection?

Q5. Will project schedules and risks predicted during the pandemic be the same post-pandemic?

 


 

Q1. How can AI help remove human bias?

“I'm very careful when I talk about human bias because it’s open to subjectivity. But, ultimately, we all want to do the right thing.

The point is, every project includes people with completely different purposes and incentives. It varies of course, but you might have internal business owners, an internal project team, a consultancy and a vendor."

 

"Projects fail because people's egos come before the desire to get the project completed." - Tatyana Duffie, IT & Business Change Project Manager

 

"So everyone is going to view things differently, and feel under pressure to protect their part of the process. So, what we want to do is get to a point where everybody can work together as best as possible. For that, we need to remove some of the human bias and subjectivity that comes from continually asking people to report on their own performance.

That’s when you can start to use qualitative project health scores that look at all the data - at cost management, velocity, team sentiment - and combine them to calculate a real, quantitative score. That score can augment what we are hearing and seeing and being told, and help project managers know where to look."


Sharktower interprets various data to create a quantitative project health score
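To make the idea of a combined health score concrete, here's a minimal sketch (illustrative only - not Sharktower's actual model) of how normalised cost, velocity and sentiment signals might be weighted into a single quantitative score. The metric names and weights are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    # All inputs normalised to 0.0 (worst) .. 1.0 (best)
    cost_health: float      # e.g. spend vs. budget
    velocity_health: float  # e.g. actual vs. planned throughput
    team_sentiment: float   # e.g. text-derived sentiment signal

def health_score(signals: ProjectSignals,
                 weights=(0.4, 0.4, 0.2)) -> float:
    """Combine the individual signals into a single 0-100 health score.

    The weights are illustrative; a real model would tune or learn them.
    """
    w_cost, w_velocity, w_sentiment = weights
    score = (w_cost * signals.cost_health
             + w_velocity * signals.velocity_health
             + w_sentiment * signals.team_sentiment)
    return round(100 * score, 1)

# Example: on budget, slightly behind on delivery, sentiment starting to dip.
print(health_score(ProjectSignals(cost_health=0.9,
                                  velocity_health=0.7,
                                  team_sentiment=0.5)))  # -> 74.0
```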

 

 

Q2. Can AI help project managers make the right decisions at the right time?

“If you consider the evolution of AI in project management as presented in a recent paper by PwC, the first phase is simply the automation of workflows and data gathering, and that allows us to make decisions faster. AI uses the same data we’re already using today to make decisions - it’s just hard to get! It’s hard to access quickly enough, it's hard to copy, and it's hard to analyse. Even simple AI allows you to access data faster, which means you can make decisions more quickly.

And then you start to look at how the data can help you make the right decisions. That's where it becomes more advanced, with things such as Monte Carlo simulations and predictive models, which can simulate complex sensitivity scenarios and show what’s likely to happen based on a specific set of factors across the whole project supply chain and dependency network.

So first it’s about getting insights and data faster to enable quick decision-making, then it’s about making the right decisions, which becomes more complex and involves some of the modelling work."
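As a rough sketch of the Monte Carlo idea mentioned above (generic, not any particular product's implementation), the snippet below samples uncertain task durations many times to estimate the probability of finishing a simple three-task chain by a deadline. The tasks and duration estimates are made up for illustration.

```python
import random

# Three sequential tasks with (optimistic, most likely, pessimistic)
# duration estimates in days. Figures are purely illustrative.
tasks = [
    ("Design", (5, 8, 15)),
    ("Build",  (10, 15, 30)),
    ("Test",   (4, 6, 12)),
]

def simulate_total_duration() -> float:
    """One Monte Carlo trial: sample each task from a triangular distribution."""
    return sum(random.triangular(low, high, mode)
               for _, (low, mode, high) in tasks)

def probability_of_hitting(deadline_days: float, trials: int = 10_000) -> float:
    """Estimate P(total duration <= deadline) over many simulated trials."""
    hits = sum(simulate_total_duration() <= deadline_days for _ in range(trials))
    return hits / trials

print(f"Chance of finishing within 35 days: {probability_of_hitting(35):.0%}")
```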

 

“AI will automate and simplify decision-making based on predefined or self-learning rules; however, it will shift the work to the rule creation and analysis area” - Brad Myers, Programme Manager, Smart DCC

 

Q3. How difficult are AI tools to implement and what are the dangers?

"In its simplest form, bot technology is really easy to implement. There's loads of it, and it's almost a consumer product these days. You can deploy it on your own desktop, and it simply learns from you - it's not doing anything and it’s not making any decisions. It’s just saying, ‘I've seen you do this thing 10 times before, should I do it for you next time?’

So automating that ‘copy and paste’ work is the first step, and immediately you’ve freed up some time.

The potential pitfalls of AI can occur when you start to automate big, complex decision-making processes, work allocation or performance management.

For example, in a previous role I was tasked with reviewing the productivity of certain colleagues at an airport. Nothing sophisticated there, but the analytics indicated that colleagues in a certain region weren’t as productive as colleagues everywhere else. Would you assign those employees to critical projects? No! BUT, further investigation showed that the poor performance was nothing to do with the people. It was the fact that these individuals had to travel to work across a bridge which was constantly congested (or closed altogether) and, as a result, they consistently had the worst attendance.

Now, if you put that information into a machine learning model that automatically allocates resources to critical work, it would never pick those people. So the AI isn't wrong, but it still requires human interpretation.

In another example, related to resource allocation for a consultancy firm, automation kept getting the balance of resources wrong, because for a consultancy to work you need to team up inexperienced people with experienced people. Otherwise you have no succession planning, and it’s difficult to run programmes like graduate schemes.

So those two examples illustrate some of the dangers you need to be aware of when machine learning models are making decisions for you, especially when it comes to people and people-related situations. For years to come, we’ll need human intervention to make sure decisions are right and to interpret what the models are showing us. That’s why we support the models in Sharktower with explainable AI. Instead of just showing outputs in charts, we explain why the issue is happening and what's causing it, so you can investigate rather than simply assume the machine learning model is right."


Sharktower's 'Explainable AI' indicates why problems may be occurring, enabling project managers to investigate.
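As a generic illustration of what 'explainable AI' means in practice (a toy linear model, not Sharktower's implementation), the sketch below breaks a delay-risk prediction into per-factor contributions so you can see which input is driving the warning. The factor names and coefficients are assumptions for the example.

```python
# A deliberately simple, linear "delay risk" model, used only to illustrate
# the idea of explaining a prediction factor by factor.
coefficients = {
    "open_blockers":      0.15,  # each unresolved blocker adds risk
    "scope_changes":      0.10,  # each late scope change adds risk
    "negative_sentiment": 0.30,  # share of negative comments (0..1)
    "velocity_shortfall": 0.25,  # fraction behind planned throughput (0..1)
}

def explain_risk(features: dict) -> None:
    """Print the total risk score and each factor's contribution to it."""
    contributions = {name: coefficients[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    print(f"Delay risk score: {total:.2f}")
    for name, contribution in sorted(contributions.items(),
                                     key=lambda kv: kv[1], reverse=True):
        print(f"  {name:<20} contributes {contribution:.2f}")

# Example project: two open blockers and a team that has turned negative.
explain_risk({
    "open_blockers": 2,
    "scope_changes": 1,
    "negative_sentiment": 0.6,
    "velocity_shortfall": 0.2,
})
```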

 

Q4. Does project management software rely on considerable data collection?

“It depends. If you want absolute answers - and by that I mean if you want the insights from your predictive analytics or machine learning models to be absolute - then it requires intensive data collection: lots of data, and lots of cleansing to get that data right. And that's going to be almost impossible, because we know our historical project data hasn’t been well structured, deep or of good quality, and so it’s not trustworthy.

So even if you had the best machine learning models in the world, if you put all your bad project data into it, you're going to get the wrong answer and people often see that as a barrier. But I don't think it should stop us from using AI technologies. You start to learn from your data fast, as long as you use it with the right intention. To that end, the models we have in Sharktower are indicators, not absolute answers. But they tell you where you need to go to look for the issues.

As a project manager, you can then investigate and see if what’s indicated is true. It highlights things you perhaps didn't see, and then hopefully you can mitigate the unseen risk. And then over time, because you're becoming more data driven - and the data is more transparent and open - the quality will get better and better."

 

"Even if you had the best machine learning models in the world, if you put all your bad project data into it, you're going to get the wrong answer." - Craig Mackay, CEO of Sharktower

 

Q5. Will project schedules and risks predicted during the pandemic be the same post-pandemic?

"There are a lot of things we can’t predict accurately in the world at the moment. But by having rapidly learning technologies on top of your data, you can quickly learn how well you're estimating projects and how your completion rates are changing. And then you can rapidly respond to those changes.

One particularly useful aspect of AI right now, while teams are working remotely, is applying Natural Language Processing to understand team sentiment.

 

What Sharktower does is look at all the text that exists within projects - the way people have written up work, how they've commented, how they've written a status report, even the way they’ve defined an activity. We look at it across the lexicon of project language to see through the project nuance and understand the sentiment and engagement of teams.

When teams become disengaged, that’s probably the biggest risk to any project. You're going to have problems, and it’ll be hard to recover from them. So we look at that across the projects and that's really useful, especially now we’ve all got teams and clients that have moved from working together in the same office to sitting on Zoom calls.

As project managers, we know that, generally, stress and disengagement occur when teams are overloaded or the project outcome is ambiguous. That’s when performance drops and slippages can occur. So being able to pre-empt that can mean the difference between project success and failure."
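To give a flavour of the NLP idea described above (a toy lexicon approach, nothing like a production sentiment model), the sketch below scores project comments by counting positive and negative terms. The word lists and comments are purely illustrative.

```python
# Tiny lexicon-based sentiment scorer for project text. A real system would
# use a trained language model; this just shows the shape of the idea.
POSITIVE = {"on track", "done", "resolved", "good", "ahead"}
NEGATIVE = {"blocked", "slipping", "delayed", "unclear", "overloaded", "risk"}

def sentiment(text: str) -> int:
    """Return a crude score: positive terms add 1, negative terms subtract 1."""
    lowered = text.lower()
    score = sum(term in lowered for term in POSITIVE)
    score -= sum(term in lowered for term in NEGATIVE)
    return score

comments = [
    "Integration work is done and testing is on track.",
    "Still blocked on environment access, timeline is slipping.",
    "Requirements remain unclear and the team feels overloaded.",
]

for comment in comments:
    print(f"{sentiment(comment):+d}  {comment}")
```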

 

Is AI the end of Project Management as we know it?

Watch the full webinar and Q&A session here.

 

See Sharktower in action!

For a personalised demo of how AI-driven project management could change the way you manage change in your business, REQUEST A DEMO or drop us an email at info@sharktower.com