{"id":53867,"date":"2025-09-15T11:47:03","date_gmt":"2025-09-15T01:47:03","guid":{"rendered":"https:\/\/www.cloudproinc.com.au\/?p=53867"},"modified":"2025-09-15T11:47:06","modified_gmt":"2025-09-15T01:47:06","slug":"alpaca-vs-phi-3-for-fine-tuning","status":"publish","type":"post","link":"https:\/\/cloudproinc.azurewebsites.net\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/","title":{"rendered":"Alpaca vs Phi-3 for Fine-Tuning"},"content":{"rendered":"\n<p>In this blog post, Alpaca vs Phi-3 for Instruction Fine-Tuning in Practice, we unpack the trade-offs between these two popular paths to instruction-tuned models, show practical steps to fine-tune them, and help you choose the right option for your team.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Instruction tuning teaches a general language model to follow human-written tasks (&#8220;Write a summary&#8221;, &#8220;Generate SQL&#8221;) reliably. Alpaca popularised low-cost instruction-tuning on top of a 7B base model. Phi-3 represents a new generation of small language models (SLMs) engineered for efficient reasoning and high utility per parameter. This post keeps things practical: a high-level comparison first, then concrete steps and code.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-high-level-overview\">High-level overview<\/h2>\n\n\n\n<p>Alpaca is a recipe: start with a capable base model (originally LLaMA-7B), fine-tune it on a curated set of instruction\u2013response pairs (about 52k), and get a model that follows prompts well for its size. It proved you could get strong instruction-following performance with modest compute using methods like LoRA.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/category\/phi-3\/\">Phi-3<\/a> is a family of small language models trained by Microsoft on high-quality, reasoning-focused data. 
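<\/p>\n\n\n\n<p>To make &#8220;instruction-following&#8221; concrete, here is a tiny helper that assembles a Phi-3-style chat prompt. This is a simplified sketch of the role-marker format only; in production, prefer the tokenizer\u2019s built-in chat template (apply_chat_template), which also inserts end-of-turn tokens.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>def build_phi3_prompt(instruction, context=\"\"):\n    # Simplified Phi-3-style chat format: one user turn, then the assistant marker.\n    # Real deployments should use the tokenizer's apply_chat_template instead.\n    body = f\"{instruction}\\n\\n{context}\" if context else instruction\n    return f\"&lt;|user|&gt;\\n{body}\\n&lt;|assistant|&gt;\\n\"\n\nprint(build_phi3_prompt(\"Write a summary of this report.\"))\n<\/code><\/pre>\n\n\n\n<p>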
Out of the box, Phi-3 models come with strong instruction-following and reasoning capabilities and can be efficiently fine-tuned for domain tasks. They aim to deliver better accuracy-per-dollar and lower latency than older 7B baselines.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-technology-behind-instruction-tuning\">The technology behind instruction tuning<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Transformer decoder models: Both Alpaca-style and Phi-3 models are decoder-only transformers. They predict the next token conditioned on the prompt.<\/li>\n\n\n\n<li>Supervised fine-tuning (SFT): We show the model many examples of (instruction, optional input) \u2192 (ideal response). This aligns behaviour to follow tasks.<\/li>\n\n\n\n<li>Adapters with LoRA\/QLoRA: Instead of updating all weights, we train small low-rank adapter matrices on quantized base weights. This slashes GPU memory while preserving quality.<\/li>\n\n\n\n<li>Formatting and prompting: Consistent prompt templates, chat roles, and system messages are crucial. Instruction models can be brittle to format drift.<\/li>\n\n\n\n<li>Evaluation loops: After fine-tuning, evaluate with held-out tasks, spot-check for factuality and safety, and iterate.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-alpaca-really\">What is Alpaca, really?<\/h2>\n\n\n\n<p>Alpaca is a Stanford research project that fine-tuned the original LLaMA-7B on ~52k instruction\u2013response pairs generated via a larger model. The appeal was its simplicity and cost efficiency. Key points:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Base model: Originally LLaMA-7B (older architecture and license constraints). Many modern reproductions use Llama 2\/3 or open-llama variants.<\/li>\n\n\n\n<li>Data: Short, diverse instructions. 
Great for general instruction-following; limited on complex reasoning.<\/li>\n\n\n\n<li>Method: LoRA adapters on top of the base with a simple prompt template.<\/li>\n\n\n\n<li>Pros: Extremely accessible recipe; easy to replicate; runs on commodity GPUs.<\/li>\n\n\n\n<li>Cons: Results depend heavily on the base model; older Alpaca stacks may lag in safety, reasoning, and license suitability for commercial use. Check base-model terms.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-phi-3\">What is Phi-3?<\/h2>\n\n\n\n<p>Phi-3 is Microsoft\u2019s small language model family, engineered to be compact but strong at reasoning and instruction following. It\u2019s trained on high-quality, curated and synthetic data emphasizing correctness, explanations, and alignment. Highlights:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sizes: Multiple sizes (e.g., \u201cmini\u201d class around a few billion parameters). Good fit for edge and low-latency server inference.<\/li>\n\n\n\n<li>Quality focus: Emphasis on textbook-quality and safety-aware data, yielding robust out-of-the-box behavior.<\/li>\n\n\n\n<li>Efficiency: Strong accuracy-per-parameter and low memory footprint; ideal for QLoRA fine-tunes.<\/li>\n\n\n\n<li>Availability: Offered through common hubs and cloud catalogs. Review model-specific licensing and usage terms for your deployment context.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-head-to-head-comparison\">Head-to-head comparison<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data quality: Alpaca\u2019s original dataset is simple and synthetic; it may require augmentation for domain depth. 
Phi-3\u2019s training corpus emphasizes reasoning and safety, often reducing the need for large fine-tune sets.<\/li>\n\n\n\n<li>Performance per parameter: Modern Phi-3 variants typically outperform older 7B Alpaca-style models on reasoning-heavy tasks at similar or smaller sizes.<\/li>\n\n\n\n<li>Latency and cost: Phi-3\u2019s small sizes fine-tune and serve cheaply (especially with 4-bit quantization). An Alpaca stack on older 7B bases may need more VRAM and still underperform.<\/li>\n\n\n\n<li>Safety and alignment: Phi-3 benefits from curated data and alignment; Alpaca-style models depend on your data sanitization and the base model\u2019s guardrails.<\/li>\n\n\n\n<li>Ecosystem: Alpaca is a recipe you can apply to many bases (Llama 2\/3, Mistral). Phi-3 has an emerging ecosystem with good support in popular tooling.<\/li>\n\n\n\n<li>Licensing: Alpaca itself is a method; your actual license comes from the base model and data. Phi-3 has model-specific terms; verify commercial usage rights before shipping.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-when-to-choose-one-over-the-other\">When to choose one over the other<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Choose Alpaca-style if: You want a reproducible, transparent SFT recipe on a base you already vetted (e.g., Llama 2\/3), you need full control over data and prompting, and you are prepared to build your own guardrails.<\/li>\n\n\n\n<li>Choose Phi-3 if: You want strong default reasoning and efficient inference, plan to deploy on modest GPUs or edge, and prefer starting from a modern, safety-aware SLM with smaller fine-tuning demands.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-practical-fine-tuning-steps-applies-to-both\">Practical fine-tuning steps (applies to both)<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Define your goals: Which tasks, constraints, and success metrics (accuracy, latency, memory)?<\/li>\n\n\n\n<li>Assemble data: Start with an instruction dataset (e.g., Alpaca
format). Add domain examples and counterexamples (edge cases). Balance breadth and depth.<\/li>\n\n\n\n<li>Choose a base: A modern, instruction-capable base saves time. If you need 7B+, consider newer architectures; otherwise Phi-3 &#8220;mini&#8221;-class can be plenty.<\/li>\n\n\n\n<li>Pick a prompt template: Consistency matters. Use a stable chat format for both training and inference.<\/li>\n\n\n\n<li>Train with QLoRA: 4-bit quantization + LoRA adapters keeps VRAM low with minimal quality loss.<\/li>\n\n\n\n<li>Evaluate: Use a held-out set; measure exact matches, BLEU\/ROUGE for text tasks, and human spot-checks for correctness and tone.<\/li>\n\n\n\n<li>Iterate: Patch data holes, adjust templates, tune hyperparameters (rank, alpha, learning rate).<\/li>\n\n\n\n<li>Harden: Add safety filters, constrain output where needed, and add monitoring.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-minimal-code-phi-3-qlora-sft\">Minimal code: Phi-3 QLoRA SFT<\/h2>\n\n\n\n<pre class=\"wp-block-code has-white-color has-black-background-color has-text-color has-background has-link-color wp-elements-3297a98d213581a2ee337454c6d89779\"><code># pip install -U transformers datasets peft accelerate bitsandbytes trl\n\nimport torch\nfrom datasets import load_dataset\nfrom transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig)\nfrom peft import LoraConfig\nfrom trl import SFTTrainer, SFTConfig  # SFTConfig holds the SFT training arguments\n\nmodel_id = \"microsoft\/Phi-3-mini-4k-instruct\"  # Check license\/terms\n\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type=\"nf4\",\n    bnb_4bit_compute_dtype=torch.bfloat16,\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    device_map=\"auto\",\n    quantization_config=bnb_config,\n)\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\n\ndataset = load_dataset(\"tatsu-lab\/alpaca\", split=\"train\")  # replace with your data\n\n# Format: simple instruction \u2192 response. Keep template consistent.\ndef format_example(ex):\n    instr = ex.get(\"instruction\", \"\")\n    input_ = ex.get(\"input\", \"\").strip()\n    output = ex.get(\"output\", \"\")\n    if input_:\n        prompt = f\"&lt;|user|&gt;\\n{instr}\\n\\n{input_}\\n&lt;|assistant|&gt;\\n\"\n    else:\n        prompt = f\"&lt;|user|&gt;\\n{instr}\\n&lt;|assistant|&gt;\\n\"\n    return {\"text\": prompt + output}\n\ntrain_data = dataset.map(format_example, remove_columns=dataset.column_names)\n\npeft_config = LoraConfig(\n    r=16, lora_alpha=32, lora_dropout=0.05, target_modules=&#91;\"q_proj\",\"v_proj\"], bias=\"none\"\n)\n\nargs = SFTConfig(\n    output_dir=\".\/phi3-instruct-lora\",\n    dataset_text_field=\"text\",\n    per_device_train_batch_size=4,\n    gradient_accumulation_steps=4,\n    num_train_epochs=2,\n    learning_rate=2e-4,\n    bf16=True,\n    logging_steps=20,\n    save_steps=500,\n    optim=\"paged_adamw_8bit\",\n)\n\ntrainer = SFTTrainer(\n    model=model,\n    tokenizer=tokenizer,\n    peft_config=peft_config,\n    args=args,\n    train_dataset=train_data,\n)\n\ntrainer.train()\n\n# Save adapter\ntrainer.model.save_pretrained(\".\/phi3-instruct-lora\/adapter\")\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-minimal-code-alpaca-style-sft-on-a-llama-base\">Minimal code: Alpaca-style SFT on a Llama base<\/h2>\n\n\n\n<p>Many teams use the Alpaca recipe on a modern Llama base (e.g., Llama 2\/3) for better licenses and quality than the original LLaMA-7B. 
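The recipe\u2019s prompt template is simple enough to show on its own; this sketch mirrors the three-section format the training script applies per record:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>def alpaca_prompt(instruction, inp=\"\"):\n    # Canonical Alpaca sections; the optional Input block is omitted when empty.\n    parts = &#91;\"### Instruction:\\n\" + instruction + \"\\n\\n\"]\n    if inp:\n        parts.append(\"### Input:\\n\" + inp + \"\\n\\n\")\n    parts.append(\"### Response:\\n\")\n    return \"\".join(parts)\n\nprint(alpaca_prompt(\"Summarise the text.\", \"Alpaca made SFT accessible.\"))\n<\/code><\/pre>\n\n\n\n<p>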
Replace the model ID with one you\u2019re approved to use.<\/p>\n\n\n\n<pre class=\"wp-block-code has-white-color has-black-background-color has-text-color has-background has-link-color wp-elements-4ce657b920369c364dad63a246e2d8db\"><code># pip install -U transformers datasets peft accelerate bitsandbytes trl\n\nimport torch\nfrom datasets import load_dataset\nfrom transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig)\nfrom peft import LoraConfig\nfrom trl import SFTTrainer, SFTConfig  # SFTConfig holds the SFT training arguments\n\nmodel_id = \"meta-llama\/Llama-2-7b-hf\"  # Accept license on HF before use\n\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type=\"nf4\",\n    bnb_4bit_compute_dtype=torch.bfloat16,\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    device_map=\"auto\",\n    quantization_config=bnb_config,\n)\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)\n\nalpaca = load_dataset(\"tatsu-lab\/alpaca\", split=\"train\")\n\ndef alpaca_format(ex):\n    instr, inp, out = ex&#91;\"instruction\"], ex.get(\"input\",\"\"), ex&#91;\"output\"]\n    prompt = (\n        \"### Instruction:\\n\" + instr + \"\\n\\n\" +\n        (\"### Input:\\n\" + inp + \"\\n\\n\" if inp else \"\") +\n        \"### Response:\\n\"\n    )\n    return {\"text\": prompt + out}\n\ntrain_data = alpaca.map(alpaca_format, remove_columns=alpaca.column_names)\n\npeft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, target_modules=&#91;\"q_proj\",\"v_proj\"])\n\nargs = SFTConfig(\n    output_dir=\".\/llama2-alpaca-lora\",\n    dataset_text_field=\"text\",\n    per_device_train_batch_size=4,\n    gradient_accumulation_steps=4,\n    num_train_epochs=2,\n    learning_rate=2e-4,\n    bf16=True,\n)\n\ntrainer = SFTTrainer(\n    model=model,\n    tokenizer=tokenizer,\n    peft_config=peft_config,\n    args=args,\n    train_dataset=train_data,\n)\n\ntrainer.train()\ntrainer.model.save_pretrained(\".\/llama2-alpaca-lora\/adapter\")\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-hardware-and-cost-notes\">Hardware and cost notes<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>VRAM: Phi-3 &#8220;mini&#8221; QLoRA fine-tunes comfortably on a single 8\u201316 GB GPU. A 7B Llama base often prefers 16\u201324 GB for smoother throughput.<\/li>\n\n\n\n<li>Throughput: 4-bit quantization and gradient accumulation keep costs low with minimal quality trade-offs.<\/li>\n\n\n\n<li>Serving: Phi-3 &#8220;mini&#8221; can hit sub-50 ms\/token on modest GPUs. Quantized 7B models can also serve quickly but may require more memory.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-evaluation-safety-and-reliability\">Evaluation, safety, and reliability<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Task accuracy: Construct a held-out set aligned to your real user prompts. Track exact match, ROUGE\/BLEU, and latency.<\/li>\n\n\n\n<li>Behavioral checks: Red-team for jailbreaks, harmful content, and data leakage. 
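The simplest rule-based check is a blocklist scan of model output (a toy sketch only; the terms below are illustrative, not a production safety system):\n<pre class=\"wp-block-code\"><code>BLOCKED_TERMS = (\"api_key\", \"password:\")  # illustrative; extend per policy\n\ndef violates_policy(text):\n    # Case-insensitive substring scan; real systems layer model-based\n    # classifiers on top of simple rules like this.\n    lowered = text.lower()\n    return any(term in lowered for term in BLOCKED_TERMS)\n<\/code><\/pre>\n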
Add rule-based or model-based filters if needed.<\/li>\n\n\n\n<li>Regression tests: Save prompts that broke previous versions; run them in CI before every release.<\/li>\n\n\n\n<li>Human-in-the-loop: For critical use-cases (e.g., healthcare, finance), require human review and detailed logging.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-deployment-tips\">Deployment tips<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Keep the training and inference prompt template identical.<\/li>\n\n\n\n<li>Export adapters separately; merge only when you need a single artifact.<\/li>\n\n\n\n<li>Use half-precision or 4-bit for serving to fit tighter memory budgets.<\/li>\n\n\n\n<li>Add simple guardrails: max output tokens, stop sequences, and content filters.<\/li>\n\n\n\n<li>Monitor drift: Track acceptance rates, objectionable content flags, and response length distributions over time.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-decision-checklist\">Decision checklist<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If you need strong reasoning at low cost and quick time-to-value: start with Phi-3 and light QLoRA.<\/li>\n\n\n\n<li>If you have a vetted 7B+ base model license and want full control over data\/prompting: Alpaca-style SFT is solid and predictable.<\/li>\n\n\n\n<li>If latency and memory are tight (edge\/CPU\/GPU-lite): Phi-3 &#8220;mini&#8221; class is often the easiest path.<\/li>\n\n\n\n<li>If you must align to a specific enterprise policy framework: pick the base with the clearest license and responsible AI posture, then fine-tune.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-conclusion\">Conclusion<\/h2>\n\n\n\n<p>Alpaca made instruction fine-tuning accessible; Phi-3 makes high-quality, efficient instruction models practical for production. If you\u2019re starting fresh and want the best accuracy-per-dollar, Phi-3 is a great default. 
If you already have a licensed Llama stack and a strong MLOps pipeline, the Alpaca recipe remains a reliable, transparent approach. In both cases, success hinges on your data quality, prompt consistency, and a tight evaluation loop.<\/p>\n\n\n\n<ul class=\"wp-block-yoast-seo-related-links yoast-seo-related-links\">\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/understanding-azure-phi-3\/\">Understanding Azure Phi-3<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/practical-ways-to-fine-tune-llms\/\">Practical ways to fine-tune LLMs<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/22\/what-is-supervised-fine-tuning-sft\/\">What is Supervised Fine-Tuning (SFT)<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/08\/19\/simple-python-ai-app-with-llama-to-describe-images\/\">Simple Python AI App with Llama to Describe Images<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/build-lean-reliable-net-docker-images-for-production\/\">Build Lean Reliable .NET Docker Images for Production<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>A practical comparison of Alpaca and Microsoft Phi-3 for instruction fine-tuning, with clear guidance, code snippets, and a decision checklist for teams balancing accuracy, cost, and compliance.<\/p>\n","protected":false},"author":1,"featured_media":53875,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"Alpaca vs Phi-3 for 
Fine-Tuning","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_yoast_wpseo_opengraph-title":"","_yoast_wpseo_opengraph-description":"","_yoast_wpseo_twitter-title":"","_yoast_wpseo_twitter-description":"","_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[13,77,88],"tags":[],"class_list":["post-53867","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","category-llm","category-phi-3"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Alpaca vs Phi-3 for Fine-Tuning - CPI Consulting<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Alpaca vs Phi-3 for Fine-Tuning\" \/>\n<meta property=\"og:description\" content=\"A practical comparison of Alpaca and Microsoft Phi-3 for instruction fine-tuning, with clear guidance, code snippets, and a decision checklist for teams balancing accuracy, cost, and compliance.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/\" \/>\n<meta property=\"og:site_name\" content=\"CPI Consulting\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-15T01:47:03+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-09-15T01:47:06+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cloudproinc.azurewebsites.net\/wp-content\/uploads\/2025\/09\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png\" 
\/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"CPI Staff\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"CPI Staff\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/\"},\"author\":{\"name\":\"CPI Staff\",\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\"},\"headline\":\"Alpaca vs Phi-3 for 
Fine-Tuning\",\"datePublished\":\"2025-09-15T01:47:03+00:00\",\"dateModified\":\"2025-09-15T01:47:06+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/\"},\"wordCount\":1310,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png\",\"articleSection\":[\"Blog\",\"LLM\",\"Phi-3\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/\",\"url\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/\",\"name\":\"Alpaca vs Phi-3 for Fine-Tuning - CPI 
Consulting\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#primaryimage\"},\"thumbnailUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png\",\"datePublished\":\"2025-09-15T01:47:03+00:00\",\"dateModified\":\"2025-09-15T01:47:06+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#primaryimage\",\"url\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2025\\\/09\\\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png\",\"width\":1536,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.cloudproinc.com.au\\\/index.php\\\/2025\\\/09\\\/15\\\/alpaca-vs-phi-3-for-fine-tuning\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Alpaca vs Phi-3 for Fine-Tuning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#website\",\"url\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/\",\"name\":\"Cloud Pro Inc - CPI 
Consulting Pty Ltd\",\"description\":\"Cloud, AI &amp; Cybersecurity Consulting | Melbourne\",\"publisher\":{\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#organization\",\"name\":\"Cloud Pro Inc - Cloud Pro Inc - CPI Consulting Pty Ltd\",\"url\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"contentUrl\":\"\\\/wp-content\\\/uploads\\\/2022\\\/01\\\/favfinalfile.png\",\"width\":500,\"height\":500,\"caption\":\"Cloud Pro Inc - Cloud Pro Inc - CPI Consulting Pty Ltd\"},\"image\":{\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/#\\\/schema\\\/person\\\/192eeeb0ce91062126ce3822ae88fe6e\",\"name\":\"CPI Staff\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/2d96eeb53b791d92c8c50dd667e3beec92c93253bb6ff21c02cfa8ca73665c70?s=96&d=mm&r=g\",\"caption\":\"CPI 
Staff\"},\"sameAs\":[\"http:\\\/\\\/www.cloudproinc.com.au\"],\"url\":\"https:\\\/\\\/cloudproinc.azurewebsites.net\\\/index.php\\\/author\\\/cpiadmin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Alpaca vs Phi-3 for Fine-Tuning - CPI Consulting","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/","og_locale":"en_US","og_type":"article","og_title":"Alpaca vs Phi-3 for Fine-Tuning","og_description":"A practical comparison of Alpaca and Microsoft Phi-3 for instruction fine-tuning, with clear guidance, code snippets, and a decision checklist for teams balancing accuracy, cost, and compliance.","og_url":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/","og_site_name":"CPI Consulting","article_published_time":"2025-09-15T01:47:03+00:00","article_modified_time":"2025-09-15T01:47:06+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/cloudproinc.azurewebsites.net\/wp-content\/uploads\/2025\/09\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png","type":"image\/png"}],"author":"CPI Staff","twitter_card":"summary_large_image","twitter_misc":{"Written by":"CPI Staff","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/#article","isPartOf":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/"},"author":{"name":"CPI Staff","@id":"https:\/\/cloudproinc.azurewebsites.net\/#\/schema\/person\/192eeeb0ce91062126ce3822ae88fe6e"},"headline":"Alpaca vs Phi-3 for Fine-Tuning","datePublished":"2025-09-15T01:47:03+00:00","dateModified":"2025-09-15T01:47:06+00:00","mainEntityOfPage":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/"},"wordCount":1310,"commentCount":0,"publisher":{"@id":"https:\/\/cloudproinc.azurewebsites.net\/#organization"},"image":{"@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/09\/alpaca-vs-phi-3-for-instruction-fine-tuning-in-practice.png","articleSection":["Blog","LLM","Phi-3"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/","url":"https:\/\/www.cloudproinc.com.au\/index.php\/2025\/09\/15\/alpaca-vs-phi-3-for-fine-tuning\/","name":"Alpaca vs Phi-3 for Fine-Tuning - CPI 