This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Parameter-Efficient Fine-Tuning of Compact Language Models for Professional LinkedIn Post Generation
Citations: 0
Authors: 4
Year: 2025
Abstract
The rise of generative AI has transformed social media content creation, but professional networking platforms such as LinkedIn demand domain-specific, contextually accurate, and stylistically appropriate outputs. Existing large language model (LLM) solutions are often computationally expensive and lack specialization for professional contexts. This paper presents a lightweight, parameter-efficient approach to LinkedIn post generation using Low-Rank Adaptation (LoRA) applied to TinyLlama-1.1B-Chat. A curated dataset of 500 job-related instruction–output pairs was extracted and preprocessed, and the model was fine-tuned in a Google Colab environment using the Hugging Face Transformers and PEFT libraries. Evaluation combined lexical similarity via difflib.SequenceMatcher with precision, recall, and F1-scores against human-curated references. The fine-tuned model achieved an average similarity of 0.78 and correctness in 85% of cases, with precision, recall, and F1 all at approximately 75%. LoRA reduced the trainable parameters to 8.8M (0.8% of the full model), cutting memory usage and training time by about 70% while maintaining stable training curves and robust generalization to unseen prompts. Generated posts were qualitatively assessed as professional, coherent, and contextually relevant. These results demonstrate the feasibility of building resource-efficient, domain-specific generators for professional communication. The proposed pipeline offers a replicable and scalable framework for automating LinkedIn post creation, bridging the gap between general-purpose LLMs and specialized professional content generation.
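The abstract names difflib.SequenceMatcher for lexical similarity and reports precision, recall, and F1 against human-curated references. A minimal sketch of that evaluation step might look as follows; note that the exact mapping of generated posts to true/false positive counts is not specified in the abstract, so the counting scheme here (and the example strings) are illustrative assumptions:

```python
import difflib

def lexical_similarity(generated: str, reference: str) -> float:
    # difflib.SequenceMatcher.ratio() returns a similarity score in [0, 1],
    # matching the 0.78 average similarity metric reported in the paper.
    return difflib.SequenceMatcher(None, generated, reference).ratio()

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    # Standard definitions; how posts are judged correct/incorrect
    # (here reduced to tp/fp/fn counts) is an assumption.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    gen = "Excited to share that I am starting a new role as a data engineer!"
    ref = "Thrilled to share that I am beginning a new role as a data engineer!"
    print(f"similarity: {lexical_similarity(gen, ref):.2f}")
    print("P/R/F1:", precision_recall_f1(tp=3, fp=1, fn=1))
```

With tp=3, fp=1, fn=1 the helper yields precision, recall, and F1 of 0.75 each, consistent with the ~75% scores reported in the abstract.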