# Learning Prompt Engineering

*Last month, I attended an event at Google that was all about AI. It was a two-day workshop where we learned about prompting and AI agents. I only joined the first day, which was an introduction to prompt engineering, but there's a recording of the topics discussed on the second day, so even though I couldn't attend, I could still learn from it.*

![IMG_7235.jpg](https://files.peakd.com/file/peakd-hive/wittythedev/23xVP1vx6j4w9k9LRbV96ArKvSLe3M6w4NCXtGMJ2P7pREv9FE378KoiFj5Mn45uf6jFf.jpg)

*I'm writing this post as takeaways/notes from what I've learned about prompt engineering, based on the event I joined and on this [whitepaper from Kaggle](https://www.kaggle.com/whitepaper-prompt-engineering). I wouldn't say I'm an expert in this subject, so if you find anything you think isn't right, please let me know in the comments. That way, I can expand my knowledge and learn from you too.*

---

# What Is Prompt Engineering 

Prompt engineering is a term you'll probably stumble upon as soon as you start exploring AI. To understand what it means, it helps to define what a prompt is first.

In my understanding, **prompts** are the instructions or text that you feed to an AI model, and in return you get a response. Good prompts will most likely generate good results, while bad prompts may produce unwanted results or hallucinations.

Prompt engineering is the process of tweaking and refining prompts and then testing them. It's not just a matter of giving an instruction to the AI - you have to keep refining it while taking into account the model's behavior, the structure of your prompt, and the settings you use. The goal is to guide the AI toward high-quality responses. For each prompt, you often need to generate a hundred or more responses and evaluate which version performs well on average, so prompt engineering basically involves a lot of experimentation and trial and error.
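
To make that iteration concrete, here's a minimal sketch of the run-many-times-and-evaluate loop in Python. `ask_model` and `score` are hypothetical placeholders - swap in your real API call and your own evaluation criteria.

```python
# Minimal sketch of the iterate-and-evaluate loop described above.
# ask_model() and score() are hypothetical placeholders, not a real API.
from statistics import mean

def ask_model(prompt: str) -> str:
    return "placeholder response"  # replace with an actual model call

def score(response: str) -> float:
    # Stand-in for your own quality check, e.g. an expected keyword or format.
    return float("placeholder" in response.lower())

def evaluate_prompt(prompt: str, runs: int = 100) -> float:
    """Average quality of a prompt over many runs."""
    return mean(score(ask_model(prompt)) for _ in range(runs))

print(evaluate_prompt("Write a summary of this post (max 100 words)."))
```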

---

# Model Settings

## Output Length

More tokens => longer text => slower response times => higher costs
Sometimes it's better to instruct the AI to "keep it short and simple" in the prompt rather than setting a hard "max output token" limit, since a token limit just cuts the output off instead of making it more concise.
```
Write a summary of this post (max 100 words).
```

Too short = maybe not good because it may leave out important details
Too long = maybe not good because it may include gibberish or repeat the same response
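
If you do want a hard cap instead, it's usually an API setting. Below is a sketch assuming the google-generativeai Python SDK (the screenshots in this post use Gemini via AI Studio); the model name is just illustrative. Note that the token limit simply truncates the output, while the prompt-based "max 100 words" approach asks for a shorter but complete answer.

```python
# Sketch: capping output length via the API instead of the prompt.
# Assumes the google-generativeai SDK; the model name is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Write a summary of this post.",
    generation_config={"max_output_tokens": 150},
)
print(response.text)
```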

## Temperature

Temperature controls how random, or "creative", the output is.

A temperature closer to 1 gives more diverse answers, which works well for techniques like self-consistency (covered below).
A temperature close to 0 makes the output nearly deterministic, which removes that diversity - so it doesn't pair well with self-consistency.

For precise, less random answers, a low temperature (0~0.3) should be fine.
If you want a mix of answers (some spot-on, some not), a temperature closer to 0.7~1 works better - it won't always give you the perfect answer, but a decent one.
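
As a quick illustration, here's the same prompt run at a low and a high temperature (same SDK assumption as above; the model name is illustrative):

```python
# Sketch: the same prompt at a low vs. a high temperature.
# Assumes the google-generativeai SDK; the model name is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")
prompt = "Suggest a name for a coffee shop run by cats."

for temp in (0.1, 0.9):
    response = model.generate_content(
        prompt, generation_config={"temperature": temp}
    )
    print(f"temperature={temp}: {response.text.strip()}")
```

At 0.1 you'll tend to see the same safe suggestion every run; at 0.9 the suggestions vary a lot more.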

## Top-K / Top-P Sampling

Top-K: sample from the K most likely next words
K = 1 => greedy decoding; always picks the single most likely word, very predictable
K = 3 => will randomly choose among the top 3 most likely words
K = 5 => more varied results

Top-P (nucleus sampling): sample from just enough of the top words to cover probability P
Top-P = 0.9 => only the words that together make up 90% of the probability mass are considered
Top-P = 1 => the entire vocabulary is eligible, the most random setting
Top-P = 0.1 => only the few most likely words are considered

| Setting | Effect | Example Use Cases |
|-|-|-|
| Low Top-K / Low Top-P | more predictable, safer answers | Math, problem solving |
| High Top-K / High Top-P | creative, varied answers | Brainstorming, creative writing |

In combination with temperature:
Low temperature + Low Top-K / Low Top-P => precise
High temperature + High Top-K / High Top-P => creative
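
Putting the settings together, a "precise" and a "creative" configuration might look like this (same SDK assumption as above; the exact values are just starting points):

```python
# Sketch: combining temperature with top_k / top_p.
# Assumes the google-generativeai SDK; values are illustrative starting points.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

precise = {"temperature": 0.1, "top_k": 1, "top_p": 0.1}     # math, problem solving
creative = {"temperature": 0.9, "top_k": 40, "top_p": 0.95}  # brainstorming, creative writing

print(model.generate_content("What is 17 * 23?", generation_config=precise).text)
print(model.generate_content("Write a haiku about rain.", generation_config=creative).text)
```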

# Prompting Techniques

## Zero Shot 

The simplest type of prompt. Zero shot means no examples are provided to the AI. You ask, and you'll be given a direct answer. 


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23u6WumPLLwnHDowH2pY1HEHfjdY4ymVef46JfXG4oZt6asUKDKbj46iq13S7c8U5u7py.png)
 
In this example, I asked the AI to classify the review, and it responded with exactly that and nothing more.
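
For readers who can't see the screenshot, a zero-shot classification prompt along the same lines looks like this (illustrative, not the exact prompt in the image):

```
Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.

Review: "The plot dragged, but the soundtrack was incredible."
Sentiment:
```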

## One-shot And Few-shot

**One-shot** means you provide an example to the AI. 
**Few-shot** means you provide multiple examples to the AI. 
Providing examples will guide the AI's response. 


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23t76bthUzQmxAs9B7J9pU6p4n5XnmKDZm3wJzjq9kqVMQhKZTE7WVnu9rx2We5WsvUoa.png)

The above is a simple example of One-shot prompting.
If I gave more examples, it would become Few-shot prompting.

You usually use one-shot and/or few-shot when zero-shot fails. 
When the pattern is easy to follow or has a fixed format, it's better to use one-shot. 
When it becomes complex or you want variations, it's better to use few-shot. 
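
Here's an illustrative few-shot version of a similar classification task (again, not the exact prompt from the screenshot); the examples establish both the labels and the output format:

```
Classify the sentiment of the last review, following the examples.

Review: "I loved every minute of it." -> POSITIVE
Review: "It was fine, nothing special." -> NEUTRAL
Review: "I walked out halfway through." -> NEGATIVE
Review: "The acting was stiff and the pacing was worse." ->
```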


## Self-Consistency

Basically it's like how we do a majority vote. 
Run the same prompt multiple times so you get multiple answers. 
From there, you pick the most likely "correct" answer by seeing which one comes up most often. 
This helps reduce randomness. 

Sample Prompt:
```
I have 10,000 yen in my wallet. A friend borrowed 500 for a drink. I bought 5 donuts for 220 yen each. How much money do I have left? 
```

✅ Try #1: 
```
You have 8,400 yen left.
```

✅ Try #2: 
```
You have 8,400 yen left.
```

❌ Try #3: 
```
You have 9,280 yen left.
```

If you run this prompt only once, it may give a wrong or random answer (*hallucination*), but by running it multiple times we can take the answer that the majority of the results agree on. 
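
The majority vote itself is easy to automate. Here's a sketch where `ask_model` is a hypothetical placeholder for a real API call (ideally run with a higher temperature so the attempts actually differ):

```python
# Sketch of self-consistency as a majority vote over several runs.
# ask_model() is a hypothetical placeholder for a real model call.
from collections import Counter

def ask_model(prompt: str) -> str:
    return "You have 8,400 yen left."  # replace with an actual API call

def self_consistent_answer(prompt: str, runs: int = 5) -> str:
    answers = [ask_model(prompt).strip() for _ in range(runs)]
    answer, count = Counter(answers).most_common(1)[0]
    print(f"{count}/{runs} runs agreed")
    return answer

print(self_consistent_answer(
    "I have 10,000 yen in my wallet. A friend borrowed 500 for a drink. "
    "I bought 5 donuts for 220 yen each. How much money do I have left?"
))
```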


## Chain of Thought (CoT)

To understand CoT better, we first need to understand standard prompting. 


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23uFueZwSDwkhdYCAtgzXCjReNgZKJUDnoYJV7gvTdSYfjpP9tgU2nNisBL3sCAKgdXkm.png)

This is an example of standard (direct) prompting. 
The response is 45, and you'd wonder how it came up with that answer.
If you do the math yourself, it shouldn't arrive at that result.
Standard prompting like this can give you an answer quickly, but it's prone to *hallucinations* like this one. 

Applying CoT, we'll tweak a bit of the same prompt. 


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23tSwpGL8DD6Jds3kGoxcrwFyoYs4LFHLupueqrSXjzyYETpXsWZAh3W15qYHJw2SP886.png)

Now the response comes with the reasoning and calculation behind it, which makes sense. 
It also arrives at the correct answer. 

Notice the difference? 
Just by changing `Return the answer directly.` to `Let's think step by step.`, I was able to get the answer I was looking for. 

Although this increases token usage, the quality of the response is better than with standard prompting. 
It can reduce *hallucinations*, but it won't completely eliminate them. 
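
The general pattern is simply to append a phrase that asks for reasoning. Reusing the yen example from earlier (the screenshots above use a different question):

```
I have 10,000 yen in my wallet. A friend borrowed 500 for a drink.
I bought 5 donuts for 220 yen each. How much money do I have left?
Let's think step by step.
```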

## Tree of Thoughts (ToT)

It's like an advanced version of CoT where many possible reasoning paths are explored in parallel, like branches of a tree, instead of a single chain. The image below best explains what ToT means. 

![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23uFwWZh4moYyoNwi75nXuJVSN1W17SDzc9kNbUEJKmxLV2rMM32ho1byuf2d4UsxEZMz.png)

<sub>source: https://arxiv.org/pdf/2305.10601</sub>

It's useful for very complex tasks like planning or brainstorming, creative writing, and/or puzzles and riddles. 

I tried running the example prompt from this [GitHub repo](https://github.com/dave1010/tree-of-thought-prompting).


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/Eo1wrX2WWvWMa7XMZSa2tm7owNNEzQddtGEu6AcGrNbDS8E927Jt7UdmHiCqXq88ULP.png)


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23t7AySwBrrxnZJjbYziH9AjbRWCuud4a5Kyc2YkQdQ4iCFTuAiUAy77jixBWuFDFRF3d.png)


![image.png](https://files.peakd.com/file/peakd-hive/wittythedev/23tGXuSR7vdwJ3VghHYkEx4mwsajUWjrqBDZqdTjLkTckq1mZVUNhfC2GWfxm4PgA2Ytr.png)
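
For reference, the prompt in that repo goes roughly along these lines (paraphrased from memory, not verbatim):

```
Imagine three different experts are answering this question.
All experts will write down one step of their thinking, then share it with the group.
Then all experts will go on to the next step, and so on.
If any expert realises they're wrong at any point, they leave.
The question is: <your question here>
```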


## ReAct (Reason + Act) 

ReAct lets the AI think, take an action, observe the result, and repeat - each time with the new information from the previous cycle - until it produces an output. 

```
Thought 1 ->  Action 1 -> Observation 1 -> Thought 2 -> Action 2 -> Observation 2 -> … -> Output
```

Actions could involve searching the web, calling APIs, running code, or anything else. 
This is useful for AI agents, assistants, or any multi-step task.

The example below is a simplified one generated by ChatGPT; in reality, this can get much more complicated. 

```
Question: How many books did author X write?

Thought: I need to find how many books Author X has written.
Action: Search “books by Author X”
Observation: Author X wrote 12 books.

Thought: Got it.
Final Answer: 12
```

We can also provide tools for the AI to use when performing actions. 
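
To make the loop concrete, here's a heavily simplified sketch in Python. `ask_model` and `search_web` are hypothetical stand-ins (real agent frameworks and real tool calls are far more involved), but the control flow - think, act, observe, repeat - is the same.

```python
# Heavily simplified ReAct loop. ask_model() and search_web() are
# hypothetical placeholders, hard-coded so the control flow is visible.
def search_web(query: str) -> str:
    return "Author X wrote 12 books."  # pretend tool result

def ask_model(prompt: str) -> str:
    # A real call would return the model's next Thought/Action or the Final Answer.
    if "Observation:" in prompt:
        return "Final Answer: 12"
    return "Action: Search[books by Author X]"

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        step = ask_model(prompt)
        if step.startswith("Final Answer:"):
            return step
        if step.startswith("Action: Search["):
            query = step[len("Action: Search["):-1]
            prompt += f"\n{step}\nObservation: {search_web(query)}"
    return "No answer found."

print(react("How many books did Author X write?"))
```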

---

# Conclusion

- Prompting is an iterative process of refining and testing
- Generate many outputs (e.g. a hundred) for each prompt so you can see and evaluate which version is better
- Start simple (use zero-shot) then move on to different complicated techniques as needed
- Be specific and clear with no room for ambiguity when writing prompts
- Instructions over constraints: tell what to do rather than what not to do
- Add your most important instructions at the beginning of your prompt or at the end of the prompt, not in the middle 
- Use imperative expressions when writing prompts (e.g. `You MUST use the code execution tool to generate and execute the code.`)
- Use prompt optimizer tools to improve prompts
- Experiment and document => 2 keys to mastering prompt engineering




---

Thanks for reading!
See you around! じゃあ、またね!

<sub>
All images are screenshots from the [Kaggle notebook](https://www.kaggle.com/code/markishere/day-1-prompting) and the Google AI Studio (using Gemini) unless stated otherwise.
Most prompts are from the examples in the Kaggle notebook's Day 1 lab.
All the information here is from my notes and takeaways from the seminar and from articles online like the [Prompt Engineering Guide](https://www.promptingguide.ai/).
</sub>

