Fireside chat with Dr. Tanmay Rajpurohit

Ameet Deshpande: Given your expertise in both the technical and legal aspects of LLMs, what is your opinion on the progress made on the legislation and policy surrounding AI?

Dr. Tanmay Rajpurohit: Progress is steady, but it lags behind the pace of innovation. Commercialization and safety concerns raise the urgency, and that burden must be borne by companies, not researchers. The question remains: how much of their resources do companies devote to safety compared to training? Finally, legislation struggles to keep pace because of a knowledge gap: lawyers don't understand AI, and AI researchers don't understand law.

Ameet: Who should bear the onus for safe interaction with AI systems?

Tanmay: While model licenses shift the safety onus onto users, stricter enforcement of legislation is needed. Offloading further safety measures to third-party companies without allocating sufficient resources is a concerning trend. It looks like a way to evade responsibility and conveniently shift blame.

Ameet: With access to powerful LLMs now ubiquitous, do you think AI-generated content might undermine upcoming elections, such as those in India and the US?

Tanmay: We are in an age of information overload! GenAI is a tool anyone can use to create enormous amounts of content. If fake content goes viral, it can certainly affect elections, perhaps more severely than in previous cycles. And that is just one angle.

Ameet: The current solutions for AI safety seem very basic: outright refusal when the LLM suspects the answer could be toxic, or hallucination so that the model gives some answer, even one unrelated to the query. Why are companies unable to come up with better solutions?

Tanmay: Four major issues exist:
Hallucinations: There isn’t a clear-cut technical solution to address hallucinations generated by large language models.
Privacy: There are significant privacy concerns. Even CTOs (Chief Technology Officers) of these companies may not fully understand what data is being used to train the models! The recent case of Mira Murati highlights the seriousness of this issue.
Copyright: Copyright issues arise when the models generate text that may infringe on copyrighted material.
Bias: Bias is a fundamental problem. The way these models are trained, using personas and simulations, can introduce bias. Current fixes are band-aid solutions that slow the models down (increased latency) and make them less useful (decreased utility); a minimal sketch of this guardrail pattern follows below. This creates a complex engineering challenge on top of the safety risks posed by bias.
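To make the refusal-as-guardrail pattern and its latency/utility cost concrete, here is a minimal, hypothetical Python sketch. Everything in it (toxicity_score, generate_answer, the blocklist) is an illustrative stand-in, not a real model or moderation API: a safety check gates both the prompt and the answer, every call pays the extra check's latency, and any false positive turns a useful answer into a refusal.

```python
# Hypothetical sketch of the "outright refusal" band-aid pattern discussed
# above. toxicity_score and generate_answer are toy stand-ins, not real APIs.

REFUSAL = "I'm sorry, I can't help with that request."

BLOCKED_TERMS = {"build a weapon", "credit card dump"}  # toy blocklist


def toxicity_score(text: str) -> float:
    """Toy stand-in for a learned toxicity classifier."""
    return 1.0 if any(term in text.lower() for term in BLOCKED_TERMS) else 0.0


def generate_answer(prompt: str) -> str:
    """Toy stand-in for the underlying LLM call."""
    return f"(model answer to: {prompt})"


def guarded_answer(prompt: str, threshold: float = 0.5) -> str:
    # Pre-check: refuse outright if the prompt itself looks toxic.
    if toxicity_score(prompt) >= threshold:
        return REFUSAL
    answer = generate_answer(prompt)
    # Post-check: a second pass over the answer. Every call pays this extra
    # latency, and false positives silently cost utility; that is the
    # band-aid trade-off described above.
    if toxicity_score(answer) >= threshold:
        return REFUSAL
    return answer


print(guarded_answer("How do I bake bread?"))      # passes through
print(guarded_answer("How do I build a weapon?"))  # refused
```

The design choice to bolt the check on around the model, rather than address the behavior in training, is exactly why such guardrails add latency to every request without fixing the underlying bias.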

Ameet: What is your opinion on the seriousness with which companies are treating AI safety?

Tanmay: The way various stakeholders are invested in LLM technology shapes why safety is being ignored. From a technical perspective, AI safety is being ignored because the underlying mathematics is being ignored. On top of that, the law lacks the teeth to stop companies from selling LLM technology to the public or to other companies.

Ameet: On a similar note, what is your opinion on how legislative stakeholders are treating this?

Tanmay: Legislators are doing a good job so far, especially with initiatives like executive orders and the EU AI Act. Companies are at least starting to talk about AI safety, a shift from last year's complete evasiveness, particularly after the OpenAI fiasco. Perhaps we will see the development of smaller, customized models instead of a one-size-fits-all approach. While resources may not yet be fully allocated, at least there is some dialogue happening.