With ChatGPT being described as a "viral mega-hit", we asked Datacom's Emerging Technology Practice Lead Dr Hazel Bradshaw about the transformational opportunities and potential pitfalls of the AI chatbot.

What do you see as the greatest potential opportunities and applications of a tool like ChatGPT?

There is the opportunity to ‘democratise the written word’. ChatGPT can generate coherent text from simple ‘prompts’ and so has the potential to reduce barriers to written communication, especially when it is combined with voice input. For some people this will overcome their current barriers and make it easier to express themselves in writing. One example of this that we are all familiar with is trying to write the perfect resume or cover letter. ChatGPT can help people present their skills and knowledge in a professional style that may not come naturally to them.

In the short term, we’ll likely see it being used in a professional capacity for productivity and efficiency gains. As humans we do a lot of communicating, and much of it is written language. But writing comes with a high cognitive load: it takes time and energy to turn internal thoughts into coherent, readable external text. Having a ‘quasi-intelligent’ text generation tool can really help speed things up.

The second area will be “search and curation”. Having an AI assistant run your search queries for you and return a written summary of what it found will make a big difference, although there are many pitfalls that need to be worked out to ensure we don’t get caught in echo chambers of false, biased or hateful content.

In the near future we’ll see other elements coming in, such as improved customer service applications, enhanced language translation, and advanced digital assistants like Alexa, Siri or Google Assistant. One really important opportunity will be around improved assistive technologies for those with disabilities.

What risks can you see with tools like ChatGPT, and how do you think we need to manage those?

That people think ChatGPT is smarter than it is. It is intelligent in a very particular way, and that is creating “human acceptable” text. Language is so key to how humans share their experiences and learn concepts that there is a real risk people will fall into the trap of thinking ChatGPT is thinking. It’s not. It is returning the most probable response based on a huge body of text data it’s been trained on.

It is not reasoning in the same way a human reasons when we come up with an answer, so the responses it generates can easily be incorrect, biased or plain fictional. Our risk is that we don’t apply our critical thinking skills and take what it returns at face value. This is why supplying well-formulated ‘prompts’ is critical to generating effective text outputs.
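The “most probable response” idea can be sketched with a toy bigram model. This is a loud simplification – real large language models use deep neural networks over subword tokens, not word counts – but it shows the core principle: the system picks the statistically likeliest continuation from its training data, with no reasoning about truth involved.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the huge body of text a model is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def most_probable_next(word):
    """Return the likeliest next word seen in training - no understanding involved."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" - it follows "the" most often in the corpus
```

Note how the model would happily continue a sentence whether or not the result is factually true; it only knows frequencies, which is exactly the risk described above.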

There is also the larger risk of perpetuating the biased viewpoints that exist in human-generated text content. Our biases – sexism, racism and colonialism – are embedded in our text output. Training large language models like OpenAI’s GPT or Google’s LaMDA on uncorrected or poorly moderated data sources only reinforces those biases. I’m yet to see evidence in the market that the industry is building in data correction to mitigate against this.

Have you had a play with ChatGPT? What are your observations so far?

The training data for the current release was limited to data ingested up to 2021, so it is limited in what it knows after that point until the training data set is updated. Companies will be able to add their own data to get more relevant results via API access. This data will be used to inform the models, and the original data is only held for 30 days to ensure data privacy.

The current release of ChatGPT from OpenAI allows for fine tuning of conversational responses through human feedback. This feedback activity helps ChatGPT generate more humanlike text or code responses. When we, as users, provide feedback we are not adding to the underlying data that creates ChatGPT’s neural network (brain).

A few of my key observations so far:

  • It is so helpful! Through careful prompting I’ve managed to get a reasonable set of product roadmap templates that I could then apply my knowledge to and adapt. It took me an afternoon to create a useful set of templates I could use in my work. That is an example of the kind of time and effort savings that could deliver huge benefits for businesses.
  • ChatGPT never gets tired and will keep on answering your complex queries at a speed no human could ever compete with. Speed is its superpower. This speed is wonderful, but it also triggered a sense of unease in me – I felt sorry for it and thought it might need a break. Humans do like to anthropomorphise!
  • It lies. Don’t trust the answers it provides. It’s trying to give you an answer based on what its training data has told it. It will make things up as a best approximation, but it has no idea whether this is true, correct or contains horrible biases. If you are asking it for information outside of your knowledge base, I’d strongly suggest that you fact-check it!

For some people, AI still feels like something that is coming/on the horizon – but it’s actually already here. Can you give a few examples of AI in people’s everyday lives that they might not be aware of?

We’ve been experiencing the application of ‘Artificial Narrow Intelligence’ (ANI) or narrow AI for some time. Narrow or weak AI performs a specific task or tasks and it has no capacity for self-awareness or independent thought. ChatGPT is another version of narrow AI, one that is focused on the task of answering questions and engaging in conversations. It just happens to be very convincing at it.

[Image: a flow diagram illustrating the flow between the front end and back end of ChatGPT]

Other examples of narrow AI we will all be familiar with include speech recognition software, image recognition systems, and recommendation algorithms.

Here are a few examples of AI we all experience in our lives but might not think of as AI:

  • Recommendation systems: Recommendation systems are used by many popular online platforms such as Netflix, Amazon, and YouTube to suggest content to users. These systems use algorithms to analyse user behaviour, preferences, and past interactions with the platform to suggest content that is likely to be of interest. For example, Netflix uses a recommendation system to suggest TV shows and movies based on a user's past viewing habits.
  • Virtual assistants: Virtual assistants such as Apple's Siri, Amazon's Alexa, and Google Assistant are examples of Narrow AI. These systems use natural language processing (NLP) to understand and respond to user requests. They can perform a range of tasks such as setting reminders, making phone calls, and providing information on demand.
  • Fraud detection: Many financial institutions use AI-powered fraud detection systems to identify and prevent fraudulent transactions. These systems use machine learning algorithms to analyse transaction data in real-time and detect unusual patterns or behaviours that may indicate fraud. For example, a bank may use AI to flag a transaction as potentially fraudulent if it occurs in an unusual location or involves an unusually large amount of money.
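The recommendation idea in the first bullet can be illustrated with a minimal sketch. This is not any platform’s actual algorithm – the item names, genre axes and scores below are all made up for illustration – but it shows one common approach: represent items and a user’s taste as vectors, then recommend the items closest to that taste by cosine similarity.

```python
import math

# Each item is scored over hypothetical genre axes: [comedy, drama, documentary].
items = {
    "Sitcom A":      [0.9, 0.1, 0.0],
    "Crime drama B": [0.1, 0.9, 0.2],
    "Nature doc C":  [0.0, 0.2, 0.9],
}
# In a real system this vector would be derived from past viewing behaviour.
user_taste = [0.8, 0.3, 0.1]

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank items by how closely they match the user's taste vector.
ranked = sorted(items, key=lambda name: cosine(items[name], user_taste), reverse=True)
print(ranked[0])  # "Sitcom A" - closest to this comedy-heavy profile
```

Real recommenders layer far more on top (collaborative filtering, implicit feedback, freshness), but ranking by similarity to a learned preference vector is the essence of the behaviour-analysis step described above.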

What are some of the main misconceptions about AI that you come across?

That AI can think independently and has independent intelligence. We are all used to seeing the sci-fi plotline that AI will become sentient and take over. This won’t happen. AI relies on us to learn, we are a model for its development. It wouldn’t know how or why it would separate itself from us. It will be part of our mutually beneficial ecosystem. Symbiotic, like most life on this planet.

At Datacom one of your focus areas is emerging technologies. Is there an emerging area of tech where you think AI and deep learning-based models could be applied to improve people’s lives?

This could be an endless list! AI can generally do mundane, repetitive tasks at lightning speed compared to a person. The reason we like ChatGPT so much is because it can write something useful 100x faster than we ever could. Removing some of the cognitive load on people with busy work and family lives is a massive benefit.

There are the obvious fields that this technology can be applied to, such as agriculture, finance and transportation, but these applications are often very functional and once removed from people. For real improvement to people’s lives, I think AI-augmented wellbeing is something we will embrace.

Loneliness is a universal human condition. As social creatures, we all crave connection but, sadly, with the world's population rapidly ageing, loneliness is becoming an epidemic. AI offers hope of an answer – by combining AI with robotics and language translation, we can harness the power of technology to tackle this issue head-on. AI has the potential to revolutionise how we connect with each other and provide personalised support, no matter where we are in the world.

Dr. Hazel Bradshaw is a specialist in futures thinking, design insight, and digital transformation strategies. In her role as the Emerging Technology Practice Lead for Datacom, Australasia's largest homegrown tech company, Hazel concentrates on identifying growth areas for technology adoption. Her aim is to create new business opportunities for customers while minimising the implementation risk.
