The perils of GenAI student submissions
Generative AI (GenAI) systems, such as ChatGPT,
can help students as a personal tutor:
by allowing them to study what interests them,
by providing in-depth explanations of topics they didn’t quite understand,
by assessing their work and identifying problems in it, and
by providing shortcuts through parts of their work that aren’t directly relevant
to what they want to learn.
However, students sometimes
misuse GenAI
to derive answers for work they were supposed to conduct on their own as part
of their learning,
or accept its answers uncritically.
For the first type of misuse, part of the blame occasionally
also lies with educators,
who give out-of-class assignments that GenAI can complete with ease.
For the second type, students must learn to avoid using
unverified GenAI output.
Needless to say, in both cases the misuse of AI may also constitute
academic fraud and violate their university’s code of conduct.
Here is my take on the practicalities of the two cases.
Continue reading "The perils of GenAI student submissions"
Last modified: Friday, April 11, 2025 6:31 am
How AGI can conquer the world and what to do about it
We have seen many calls warning about the existential danger
the human race faces from artificial general intelligence (AGI).
Recent examples include the letter asking for a six-month pause in
the development of models more powerful than GPT-4
and
Ian Hogarth’s FT article calling for a slow-down in the AI race.
In brief, these assert that the phenomenal increase in the
power and performance of the AI systems we are witnessing
raises the possibility that these systems will render humanity obsolete.
I’ve already
argued that some of the arguments made are hypocritical,
but that doesn’t mean that they are also vacuous.
How credible is the AGI threat, and what should we do about it?
Continue reading "How AGI can conquer the world and what to do about it"
Last modified: Friday, April 14, 2023 9:12 pm
The hypocritical call to pause giant AI
The recent
open letter calling for a pause in giant AI experiments
correctly identifies a number of risks associated with the development of AI, including job losses, misinformation, and loss of control. However, its call to pause some types of AI research for six months smacks of hypocrisy.
Continue reading "The hypocritical call to pause giant AI"
Last modified: Thursday, March 30, 2023 8:15 pm
AI deforests the knowledge’s ecosystem
Big tech’s dash to incorporate ChatGPT-like interfaces into their search engines threatens the ecosystem of human knowledge with extinction. Knowledge development is a social activity. It starts with scientists publishing papers and books that build on earlier ones and with practitioners, journalists, and other writers disseminating these findings and their opinions in more accessible forms. It continues through specialized web sites, blogs, Wikipedia, as well as discussion and Q&A forums. It further builds upon our interactions with these media through web site visits, upvotes, likes, comments, links, and citations. All these elements combined have yielded a rich global knowledge ecosystem that feeds on our interactions to promote the continuous development of useful and engaging content.
Continue reading "AI deforests the knowledge’s ecosystem"
Last modified: Thursday, March 16, 2023 3:18 pm
Installing PyTorch on a Raspberry Pi-3B+ redux
This is an update to the articles on installing the
PyTorch machine learning library
on a Raspberry Pi, published by
Amrit Das in 2018
and
Saparna Nair in 2019.
It builds on them by updating the required settings and by introducing a fix
and a few tweaks that make the process run considerably faster.
Although there are Python wheels floating around that offer PyTorch
as a Raspberry Pi Python package,
downloading them from unverified sources is a security risk.
Here’s how to install PyTorch from source.
Continue reading "Installing PyTorch on a Raspberry Pi-3B+ redux"
Last modified: Tuesday, March 17, 2020 5:19 pm