- Author: Mark Pesce | Futurist & Author
- Posted: October 30, 2023
Are CFOs Ready for AI?
The question echoing through boardrooms and leadership teams is simple, yet daunting: “Are we ready for AI?” As organisations rush to harness the potential of artificial intelligence, CFOs must embrace its potential, but also confront the formidable challenges and unchartered risks it presents. Futurist, Mark Pesce deep dives into the rapidly evolving world of AI and shares insights and strategies to better understand the risks and opportunities in a age where everyone wants to harness the power of AI.
As you read these words, Microsoft has been stealthily upgrading all of its nearly half a billion Windows 11 PCs to use something called ‘Windows Copilot’. Announced with fanfare by Microsoft CEO Satya Nadella at the end of May, Windows Copilot integrates a very powerful AI chatbot into the Windows 11 operating system. After the upgrade, a pretty technicolor icon appears on the Windows 11 taskbar – click on it, and a window opens the Windows Copilot AI chatbot. Built out of the best bits of ChatGPT, it can do much more than answer trivia questions or do some light writing: Windows Copilot integrates into Windows, so it can work with your files, system settings and so forth. Over the next year, as Microsoft adds Copilot to all of its apps – with Office, OneDrive and OneNote next on the list – artificial intelligence will thread its way through every aspect of our PCs.
That sounds wonderful – but back here on Earth we still have a incomplete understanding of how these powerful AI systems work, no guidance on how to use them safely, and very little experience on how to use them wisely. This means that there’s quite a bit of scope for naïeve users to use Windows Copilot in ways that seem smart but may be quite dangerous.
For example, AI chatbots can be fed long documents, then asked for a concise summary of their contents, listing the major points, and so forth. That’s something I’ve found very useful this year, as I worked my way through a mountain of scientific papers describing advances in the field of AI. But what happens when someone uploads a commercial-in-confidence report on the state of the business for a summary? Or a worksheet of financial statements that could have far-reaching impacts if they were made public outside of ASIC guidelines? These are the sorts of actions that are very easy to perform with Windows Copilot, yet most people do not understand that when they’ve uploaded a document or worksheet to Windows Copilot that it’s being sent off to Microsoft’s cloud of servers – possibly on the other side of the world. Microsoft retains a copy (which they’ll use to help train Windows Copilot to give better answers) and it’s unclear where else that information might surface. It should never have left the building, but the person who took the action had no way to anticipate the consequences. That illustrates where we are with AI right now, and you feel tempted to dismiss this as a hypothetical example, in August Crikey reported that personel at the Department of Defence were uploading documents to ChatGPT for analysis – without any thought to potential security implications.
Most businesses haven’t taken a look at how AI may already be used by their employees. It could be that IT policies on office machines keep those PCs secure – but what about the machines folks use when working from home a few days a week? Do they have the same safeguards? Do employees know when they can and can not use AI chatbots in their work? An audit would be revealing, and should be done soon. it could be as simple as an email survey, and will provide results that can be used to develop a division-wide or even an organisation-wide policy.
Policy is a leadership-led area, because leadership has to answer some fundamental questions about how the organiation sees itself and its future in a world where AI is rapidly going everywhere and into everything. What is the role of AI in your workflow? When is it permissable to use AI? When is it a good idea to use AI? When should it be avoided or forbidden? Policy becomes the foundation for the development of clear guidelines that provide organisational ‘guardrails’ around the appropriate use of AI.
Policy helps organisations understand when they can use AI – but does your organisation have procedures in place to use it well? Nearly everyone who uses an AI chatbot has learned by playing with it. While a fantastic way to get comfortable, that leaves users poorly served when it comes to the things they shouldn’t do – such as uploading private or sensitive data – and also leaves them underserved by a growing body of technique.
Every organisation using AI would be well-served by performing research to document best practices in the use of AI in their own domains of expertise; the needs of a financial professional differ greatly from that of an engineer or solicitor – yet each can greatly benefit from the wise use of AI. How do those needs shape themselves into procedures? Who in your organisation has mastered those procedures? Have those procedures been documented? Do you have a framework in place so that employees can mentor one another in using AI?
While policies will likely remain stable over the next months and years, procedures will remain a moving target. We know only a little of the capabilities of AI chatbots, and we know next to nothing about what sorts of new workflows will emerge as people incorporate AI chatbots into their normal work practices. Procedures will change monthly – sometimes weekly – as we learn new and better ways of working with AI. That sounds a bit chaotic, and it will occasionally feel that way over the next few years; the gain for that pain will be vastly more productive organisations that have mastered the latest and greatest AI capabilities by deeply integrating them into their workflow.
Finally, what happens when things go wrong? What are the protocols that employees and organisations need to observe when they’ve actioned information generated by a ‘hallucinating’ AI chatbot spewing out ‘fake news’? What do you do when a document uploaded for analysis contains a hidden ‘payload’ designed to instruct the AI do something dangerous or harmful? Both of these situations already occur today, and will continue to occur into the foreseeable future. AI may be powerful – but it’s far from perfectly reliable. To use this new power means wearing some new responsibilities – organisations need to have protocols in place to negotiate the moments when AI goes wrong.
At the beginning of this year, the widespread deployment of AI chatbots appeared at least a few years away. In reality, it’s only taken a few months. Microsoft – with Windows Copilot – and Google – with Bard – already have theirs in wide release. By the end of this year, Meta will be adding AI chatbots to Facebook Messenger, Instagram and WhatsApp. More than three billion people will be using AI chatbots – inside and outside of the organisation; in the office and at home. We aren’t well prepared for this, but there’s still time to pull our heads in, have a good think, and then move confidently, safely, and wisely into a world where everything has suddenly grown very smart.