I spoke at AI for the Rest of Us last week. This new two-day event took place in London and aimed to help attendees understand what AI is about and how to use it, in plain language. Some talks also considered potential issues, including bias and ethics.
The mornings had various keynotes, then each afternoon split into three parallel tracks, with some running on Thursday and others on Friday:
- Back to Basics: AI Fundamentals
- AI in Action: Real World Stories and Lessons Learnt
- AI in the Workplace: Productivity Game Changers
- Business Strategy Meets AI: Embrace Change and Unlock Opportunity
- Developer Experience: How AI is Changing Software Engineering
- Responsible AI: Safety, Security, Guardrails and Governance
I was speaking in the AI Fundamentals track, and tried to explain curve fitting, why it's sometimes called regression, and neural networks in under 30 minutes. A challenge, but people talked to me afterwards, so I may have succeeded.
If you don't know why we sometimes use the word regression, here's a clue. Francis Galton wrote a paper called "Regression towards Mediocrity in Hereditary Stature"; Wikipedia has quite a good overview. He observed the heights of children and their parents, and spotted that children of unusually tall or short parents tended back towards the average. Nothing that profound really, though there is a bias loitering in there. Anyway, the talks were recorded and my talk is now on YouTube.
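If curve fitting sounds abstract, in code it boils down to finding the line (or curve) that best matches some noisy points. Here's a minimal sketch in Python with made-up data, not the example from my talk:

```python
# A tiny curve-fitting sketch: a least-squares straight line through
# made-up noisy data. Illustrative only, not the example from my talk.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # noisy "measurements"

slope, intercept = np.polyfit(x, y, deg=1)  # fit y ~ slope*x + intercept
print(f"fitted slope={slope:.2f}, intercept={intercept:.2f}")
```

The fitted slope and intercept come out close to the 2.5 and 1.0 used to generate the data, which is all "learning" really means here.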
I have copious notes, but one highlight was Rachel Lee-Nabors reminding me of Russell and Norvig's book, Artificial Intelligence: A Modern Approach. I read that when I started my PhD many years ago, probably an older edition; Rachel suggested the 4th edition. It covers a lot of ground and is a great introduction if you want a good book. Rachel also mentioned Trask's Grokking Deep Learning book. Her talk, "AI cram session", explained lots of details about LLMs and more besides.
Several talks tried to explain LLMs, going into various levels of detail. Ideas from embedding to attention and multilayer perceptrons were covered. These are fundamental parts of LLMs, so no surprise there, for me anyway. The talk is here.
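In case those terms are unfamiliar: an embedding is just a list of numbers standing in for a token, and attention boils down to taking weighted averages of those lists. Here's a toy sketch in plain numpy, purely for illustration; none of the talks used this code:

```python
# Toy scaled dot-product attention (one head, no learned weights),
# showing the "weighted average of values" idea at the heart of LLMs.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # how similar each query is to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: each row sums to 1
    return weights @ V                                 # weighted average of the values

tokens = np.random.default_rng(0).normal(size=(4, 8))  # 4 "tokens", each an 8-number embedding
print(attention(tokens, tokens, tokens).shape)         # (4, 8): one blended vector per token
```

Real models add learned weight matrices, many heads and many layers, but the core operation is that simple.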
Of course, there's more to AI than LLMs. I really enjoyed Lisa Becker's talk "Beyond LLM-Washing: When other ML models are simply better".
She pointed out the potentially huge cost of using LLMs and encouraged us to at least consider other methods, like classification or clustering, along with predictive models or anomaly detection. She noted that using LLMs and GenAI tends to carry prestige, but might not be practical. Her talk is here.
I also enjoyed Lianna Potter's talk "Never Neutral: AI Development, Past, Present and Future in Anthropological Research". She describes herself as a digital anthropologist.
She talked through Diana E. Forsythe's book, Studying Those Who Study Us. The bias that creeps in when people focus on the wrong things never stops! I've not come across this book before, but it's now on a long list of books to read. Her talk is here.
Ian Miell's talk about LLMs in the humanities was interesting. He set himself the challenge of getting AI to write an essay he had once had to write at university. He also talked through some AI history, including machines built to play chess. The details of his personal project to generate an essay were relatively high level, but easy to follow. His talk is here.
I also went to Jeff Watkins' talk "Four Horsemen of the Information Apocalypse". He started by asking who invented:
- the light bulb
- the printing press
- the internet
These questions are often answered wrongly, which made a great lead-in to talking about misinformation, mal-information, disinformation and non-information. He drew an analogy with a virus spreading and encouraged us to lower the R number, to stop the spread, or at least reduce it. His talk is here.
I went to several other talks too, but the different afternoon tracks meant I missed many. I look forward to these turning up on the internet. The speakers' brief was to keep things understandable. That's hard if you don't know the audience, but I think the speakers managed. Some technical terms were used, but analogies or simple explanations were offered.
I'm glad I went. I met new people and had lots of interesting conversations. Having speakers' drinks the night before meant I recognised a few people when I turned up on the first morning, which was lovely.
You can access the recordings via the YouTube playlist. Again, they all aim to be understandable, even if you don't know anything about how AI works.