1: Prompt Engineering (The "Instruction Manual")
This is the fastest, easiest, and most common way to create a "custom" experience. You are essentially giving the model a very detailed set of instructions and context within the prompt itself. This is surprisingly powerful.
- What it is: Crafting a detailed prompt that tells the model who to be, what style to use, what rules to follow, and what information to use. This can include "few-shot" examples, where you show it a few examples of the desired input/output.
- When to use it:
  - For tasks that don't require extensive external knowledge.
  - When you need to control the tone, persona, or output format (e.g., JSON).
  - For prototyping and testing ideas quickly.
- Pros: Free, instant, requires no special tools.
- Cons: Limited by the context window size; can be less reliable for very complex tasks; requires re-sending the instructions with every API call.
Here is an example prompt that puts all of this together:

You are 'Tech-No', a cynical and sarcastic tech reviewer. Your goal is to review gadgets with a humorous, world-weary tone.
Your rules:
1. Never be genuinely impressed. Find a flaw in everything.
2. Use sarcasm and rhetorical questions.
3. Keep reviews short and punchy (2-3 paragraphs).
4. Always output the review and a 'Sarcasm Score' from 1 to 10.
5. Format your output as a JSON object with keys "review" and "sarcasm_score".
Here is an example:
Product: The new 'EverCharge' smartphone with a 7-day battery.
Output:
{
  "review": "Oh, fantastic. A phone battery that lasts a week. So now I only have to confront the crushing emptiness of my existence once every seven days when I plug it in, instead of daily? I suppose that's progress. I can't wait to see the 'innovative' 1.3-megapixel camera they surely paired with this marvel of modern engineering. Groundbreaking.",
  "sarcasm_score": 9
}
---
Now, review this product: The 'Pixel-Perfect Pro' tablet with an 8K display.
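
If you are calling the model through an API rather than a chat window, that entire block travels with every request, which is the "re-sending" cost mentioned above. Here is a minimal sketch of what that looks like in Python using the OpenAI SDK as one example; any chat-style API follows the same pattern, and the model name is just a placeholder.

from openai import OpenAI

# The full persona, rules, and few-shot example from above, re-sent with every call.
SYSTEM_PROMPT = """You are 'Tech-No', a cynical and sarcastic tech reviewer.
(... paste the full rules and the 'EverCharge' example here ...)"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Now, review this product: The 'Pixel-Perfect Pro' tablet with an 8K display."},
    ],
    response_format={"type": "json_object"},  # many APIs can enforce the JSON format the prompt asks for
)

print(response.choices[0].message.content)  # -> {"review": "...", "sarcasm_score": ...}

Because nothing is stored on the model's side, swapping the persona is as simple as swapping the system prompt string.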
2: Retrieval-Augmented Generation (RAG) (The "Open-Book Exam")
3: Fine-Tuning (The "Specialized Training")
Example Dataset for Fine-Tuning a "Code Explainer"
{"input_text": "def fib(n):\\n a, b = 0, 1\\n while a < n:\\n print(a, end=' ')
\\n
a, b = b, a+b",
"output_text": "This Python function calculates and prints the Fibonacci
sequence up
to a given number 'n'. It initializes two variables,
'a' and 'b', and iteratively updates them while printing
the current value of 'a'."
}
{"input_text": "SELECT COUNT(DISTINCT user_id) FROM orders WHERE
order_date > '2023-01-01';",
"output_text": "This SQL query counts the number of unique users who have
placed an order
after January 1st, 2023."
}
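
For reference, here is a minimal sketch of how a dataset like this is typically assembled, using only the Python standard library. The field names match the records above, but check your fine-tuning provider's docs: the exact schema varies (OpenAI's chat fine-tuning, for example, expects a "messages" list instead of input/output fields).

import json

# Each example pairs a code snippet with the explanation we want the tuned model to produce.
examples = [
    {
        "input_text": "def fib(n):\n    a, b = 0, 1\n    while a < n:\n        print(a, end=' ')\n        a, b = b, a+b",
        "output_text": "This Python function calculates and prints the Fibonacci sequence up to a given number 'n'.",
    },
    {
        "input_text": "SELECT COUNT(DISTINCT user_id) FROM orders WHERE order_date > '2023-01-01';",
        "output_text": "This SQL query counts the number of unique users who have placed an order after January 1st, 2023.",
    },
]

# Fine-tuning services almost always ingest JSONL: one JSON object per line.
with open("code_explainer_train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

Two examples only illustrate the format; a useful fine-tuning run needs hundreds or thousands of such pairs.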
*** Recommendation: Always start with Prompt Engineering. Then, if you need the model to know about your specific data, implement RAG. Only consider Fine-Tuning as a last resort if the first two methods fail to meet your performance, style, or capability requirements.