ADVANCE 2023 with the ADAPT Centre

7th June 2023

Posted by Learnovate

GENERATIVE AI – RUNNING TO STAND STILL

The recent ADVANCE 2023 forum organised by ADAPT, a Science Foundation Ireland-funded research centre, took a deep dive into Generative AI. The half-day, in-person event (held in Trinity College Dublin on June 7th) was aimed at anyone involved in or interested in AI research and innovation. It featured an impressive list of speakers and very much took an ‘under the hood’ look at AI, and Generative AI in particular.

In the first keynote, Professor of Computer Science at DCU Anya Belz gave a fascinating and informative talk on the inner workings of Language Models (LMs) in which she signalled the arrival of what are called Transformer LMs as a breakthrough moment in the history of AI. A Transformer LM is a neural network that learns context, and thus meaning, by tracking relationships in sequential data such as the words in a sentence. The first Transformer LM was developed by researchers at Google and presented in a seminal research paper entitled ‘Attention Is All You Need’, published in 2017. [1] By changing the way AI engines acquire meaning, Transformer LMs have allowed AI to mimic human reasoning, thus paving the way for the Generative AI revolution we are currently witnessing.
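
Although the full Transformer architecture is elaborate, its central idea fits in a few lines. Below is a minimal NumPy sketch of the scaled dot-product attention mechanism described in ‘Attention Is All You Need’ (an illustration on toy data, not code presented at the event):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token 'attends' to every other token: the similarity between
    queries and keys determines how much of each value is mixed in."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
    return weights @ V                             # context-aware representations

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))  # four tokens, eight-dimensional embeddings
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): each token now carries context from the others
```
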
Professor Belz also highlighted the importance of Prompt Engineering as a means of improving the capacity of LMs to undertake different and more complex tasks, such as arithmetic reasoning. Despite its technical-sounding name, Prompt Engineering is less about engineering and more about carefully crafting prompts (using precise vocabulary and verbs) that probe the weaknesses of a Generative AI LM and, in doing so, improve its overall performance, accuracy, and reliability. [2]
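
By way of illustration (the wording below is hypothetical, not an example shown at the event), the gap between a casual prompt and an engineered one can be striking:

```python
# Hypothetical prompts, for illustration only.
vague_prompt = "Tell me about this contract."

# An 'engineered' prompt: precise role, task, output format and fallback behaviour.
engineered_prompt = (
    "You are a commercial lawyer. Summarise the attached contract in "
    "exactly five bullet points, list every payment obligation with its "
    "due date, and flag any clause that limits liability. If information "
    "is missing, reply 'not stated' rather than guessing."
)
```
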
Furthermore, she explained that a particular type of prompting, called ‘Chain of Thought’ prompting, represented the current ‘gold standard’ for building reliable Generative AI LMs. [3] Rather than judging only the final answer from the AI engine, ‘Chain of Thought’ prompting focuses on the intermediate reasoning steps in order to improve the chances of the correct answer being generated. In essence, ‘Chain of Thought’ prompting focuses on how the AI works out an answer, with a view to constantly improving its ‘reasoning’ over time.
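
A ‘Chain of Thought’ prompt typically includes a worked example whose answer spells out its reasoning, nudging the model to do the same before committing to an answer. A sketch (the arithmetic example is ours, modelled on the style popularised in the research literature):

```python
# Hypothetical few-shot Chain-of-Thought prompt. The exemplar answer shows
# its intermediate reasoning steps before stating the final answer.
cot_prompt = """\
Q: A canteen had 23 apples. It used 20 for lunch and bought 6 more.
How many apples does it have now?
A: The canteen started with 23 apples. After using 20, it had 23 - 20 = 3.
After buying 6 more, it had 3 + 6 = 9. The answer is 9.

Q: A library had 15 books on display. 7 were borrowed and 4 were returned.
How many books are on display now?
A:"""
```
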

In a second, complementary keynote on the Opportunities, Challenges and Future Directions of Generative AI, Professor of Computer Science at Maynooth University John Kelleher talked about the ongoing (indeed never-ending) trade-off between accuracy and creativity in the world of Generative AI. We need to find a way to build AI engines that strike an acceptable balance between these two competing demands.
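
One concrete place this trade-off surfaces (our illustration, not necessarily Professor Kelleher’s example) is the sampling ‘temperature’ used when a model picks its next word: low temperatures make output predictable and repeatable, while high temperatures make it more varied and creative, but also more error-prone.

```python
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Temperature-scaled sampling over a model's raw token scores.
    Low temperature -> sharper distribution (predictable, 'accurate').
    High temperature -> flatter distribution (varied, 'creative')."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(42)
logits = [4.0, 2.0, 1.0, 0.5]  # toy scores over four candidate tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)  # pick frequencies per token
```
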
Professor Kelleher spoke about the range of tools and techniques that can be used to ‘fine-tune’ AI models. These include Reinforcement Learning from Human Feedback (RLHF), a technique where the AI engine generates a variety of answers to a prompt, humans rank those answers, and the rankings are then used to steer further training of the model. Another technique, Parameter-Efficient Fine-Tuning (PEFT), involves identifying a small set of key parameters that can be incrementally ‘tweaked’ to improve the performance and accuracy of the LM.
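
To make PEFT concrete, here is a minimal PyTorch sketch of one widely used PEFT technique, LoRA (low-rank adaptation). The layer sizes are arbitrary and the code is illustrative, not something demonstrated at the event:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA sketch: freeze the large pre-trained weight matrix and train
    only a small low-rank update stacked on top of it."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pre-trained weights
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero-init: starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(1024, 1024))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,}")  # ~8k of ~1.06m parameters
```
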
He also spoke about a number of open-source models and datasets available to those wishing to build their own Large Language Model. The leading example is the Falcon 40B Model, so called because it boasts forty billion parameters that the user can adjust and customize. [4] The Falcon 40B Model was pre-trained on content gathered from web crawls (using resources like Common Crawl), research papers and even social media conversations.
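
For the curious, the Falcon models are published on the Hugging Face Hub and can be loaded with the `transformers` library. A sketch, assuming the published model id ‘tiiuae/falcon-40b’ (note that the full 40B model requires very substantial GPU memory, so this is illustrative rather than a recipe):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "tiiuae/falcon-40b"  # published model id on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available GPUs (needs `accelerate`)
    trust_remote_code=True,      # Falcon shipped custom modelling code at release
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Generative AI is", max_new_tokens=30)[0]["generated_text"])
```
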

In addition to the (very high quality) keynotes, the event featured a number of other presentations and panel discussions exploring different areas of the Generative AI landscape. These included a discussion on Data, Ethics and AI Regulation, whose panel included Patricia Scanlon, Ireland’s AI Ambassador. In the course of the discussion, the difficulties of regulating such a rapidly changing environment were laid bare.

So, what were the key takeaways from the event?
With the arrival of generative language models, the world of AI has become a frighteningly fast-moving landscape. Everything that is now said about AI needs the caveat ‘at this point in time.’
Mitigating the negative aspects of Generative AI (hallucinations, bias, etc.) requires a sound understanding of the architecture on which it is built.
Maintaining any control over the direction of Generative AI will be hugely challenging but is an absolutely essential task.

[1] https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
[2] https://www.businessinsider.com/prompt-engineering-ai-chatgpt-jobs-explained-2023-3?r=US&IR=T
[3] https://www.promptingguide.ai/techniques/cot
[4] https://towardsdatascience.com/harnessing-the-falcon-40b-model-the-most-powerful-open-source-llm-f70010bc8a10
