PodParley

Ep. 254 - Part 2 - June 4, 2024

An episode of the TechcraftingAI NLP podcast, hosted by Brad Edwards and titled "Ep. 254 - Part 2 - June 4, 2024", was published on June 5, 2024 and runs 35 minutes.


ArXiv NLP research for Tuesday, June 04, 2024.


00:20: Description Boosting for Zero-Shot Entity and Relation Classification

01:44: Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning

03:09: Enhancing Retrieval-Augmented LMs with a Two-stage Consistency Learning Compressor

04:30: Prompting Large Language Models with Human Error Markings for Self-Correcting Machine Translation

05:41: mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models

06:53: Technical Language Processing for Telecommunications Specifications

08:09: On Affine Homotopy between Language Encoders

09:25: Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering

10:32: Probing the Category of Verbal Aspect in Transformer Language Models

11:58: Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection

13:03: LlamaCare: A Large Medical Language Model for Enhancing Healthcare Knowledge Sharing

14:33: Retaining Key Information under High Compression Ratios: Query-Guided Compressor for LLMs

15:51: On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept

17:30: Multiple Choice Questions and Large Language Models: A Case Study with Fictional Medical Data

19:08: The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding

20:07: Representations as Language: An Information-Theoretic Framework for Interpretability

21:32: Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding

22:46: Hiding Text in Large Language Models: Introducing Unconditional Token Forcing Confusion

24:21: Language-Universal Speech Attributes Modeling for Zero-Shot Multilingual Spoken Keyword Recognition

25:37: Deterministic Reversible Data Augmentation for Neural Machine Translation

26:39: CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks

28:14: Scalable MatMul-free Language Modeling

30:03: SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices

31:37: Mitigate Position Bias in Large Language Models via Scaling a Single Dimension

33:10: TopViewRS: Vision-Language Models as Top-View Spatial Reasoners
