PodParley

Ep. 33 - October 26, 2023

Episode 33 of the TechcraftingAI NLP podcast, hosted by Brad Edwards and titled "Ep. 33 - October 26, 2023," was published on October 27, 2023 and runs 51 minutes.




arXiv research summaries for Computation and Language from October 26, 2023. You can find summaries and links to each article here.


Today's research themes (AI summary)

  • Advances in large language models through fine-tuning, prompting, and quantization. Examples include HuggingFace LLaMA, custom prompting for ChatGPT, and quantizing models down to 4 bits.
  • Analyzing capabilities and limitations of large language models on reasoning, commonsense, and factual knowledge tasks. Papers probe model knowledge in areas like theory of mind, spatial reasoning, and stumpers.
  • Improving natural language processing applications with techniques like contrastive learning, targeted pretraining, and reinforcement learning from human feedback. Areas include intent detection, question answering, and summarization.
  • Evaluating bias, fairness, transparency, and ethics of large language models and datasets. Analyzing issues like occupational stereotypes, licensing and attribution of training data, and fairness-explainability tradeoffs.
  • Enabling multilinguality, cross-linguality, and transfer learning. Examples include cross-lingual transfer for low-resource languages, code embeddings across programming languages, and continual learning for multilingual ASR.

