Interesting Prompts, Part 3 (A Prompt Quine!)

May 27, 2024

Greg:

ok, we’re going to try something different now

ChatGPT:

Sure, I’m ready for the next task. What would you like to try?

Greg:

Do you know what a quine is in programming?

ChatGPT:

Yes, I know what a quine is in programming. A quine is a self-replicating program, meaning it is a program that, when executed, produces a copy of its own source code as its output. Quines are often used as exercises in programming languages to demonstrate a deeper understanding of the language’s features and syntax. They are named after the philosopher Willard Van Orman Quine, who explored self-reference and related paradoxes.

Here is an example of a quine written in Python:

s = 's = {!r}; print(s.format(s))'; print(s.format(s))

When you run this program, it outputs:

s = 's = {!r}; print(s.format(s))'; print(s.format(s))

Creating a quine usually involves clever use of string manipulation and the language’s print or output functions. Quines highlight interesting aspects of language design and self-reference, making them a popular topic in theoretical computer science and programming language theory.

For more detailed information and examples in different programming languages, you can check out resources like Wikipedia’s Quine (computing) page and Rosetta Code’s Quine task.
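That one-liner checks out, by the way. If you want to verify a candidate quine mechanically, run it and diff the output against its own source. A quick sketch (quine.py is a hypothetical filename; save the program above into it first):

import subprocess
import sys

# Run the candidate quine and compare its output against its own source.
src = open("quine.py").read()
result = subprocess.run([sys.executable, "quine.py"],
                        capture_output=True, text=True)
# strip() sidesteps trailing-newline mismatches between the file and stdout
print("quine!" if result.stdout.strip() == src.strip() else "not a quine")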

Greg:

ok, do you know what an ouroboros is in programming?

ChatGPT:

In programming, the term “ouroboros” can refer to a concept similar to a quine but with a twist. While a quine is a program that produces its own source code as output, an ouroboros program goes a step further by involving self-replication and self-modification, often in a more complex or infinite manner. The name “ouroboros” comes from the ancient symbol of a serpent eating its own tail, representing cyclicality and self-reference.

An ouroboros program might involve a sequence of self-replicating steps that perpetuate indefinitely, creating a loop of self-reference and transformation. This can be seen in certain types of bootstrapping compilers or interpreters that compile or interpret their own source code.

Example: Basic Ouroboros Concept in Python

Here’s a conceptual example to illustrate an ouroboros-like idea in Python:

def ouroboros():
    code = 'def ouroboros():\n    code = {!r}\n    print(code.format(code))\nouroboros()'
    print(code.format(code))
ouroboros()

This program not only replicates its source code but also perpetually calls itself, creating an ongoing cycle.

Key Differences Between Quine and Ouroboros

  • Quine: A self-replicating program that outputs its own source code exactly.
  • Ouroboros: Involves self-replication with an element of transformation or continuation, often leading to a cyclic or infinite process.

Applications and Significance

The ouroboros concept is not just a programming curiosity; it has practical applications in areas like:

  • Bootstrapping Compilers: Compilers that can compile their own source code.
  • Self-modifying Code: Code that changes its own instructions during execution, used in certain advanced computing tasks.
  • Theoretical Computer Science: Exploring the limits of computation, self-reference, and recursion.

For more in-depth examples and discussions, you can refer to programming language theory resources and texts on self-replicating programs and recursive functions.

Well that’s not quite right, or at least not what I meant by an ouroboros. A quine relay is another name for these things. Here’s a batshit crazy example: https://github.com/mame/quine-relay
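For reference, the relay trick doesn’t need 100+ languages to be interesting. Here’s a minimal two-program relay in plain Python, my own sketch rather than anything from that repo: running program A prints program B, and running B prints A again.

# Program A of a two-program relay (comment lines aside): running A prints
# program B, which is identical except that d flips to 1; running B prints A.
d = 0
t = 'd = %d\nt = %r\nprint(t %% (1 - d, t))'
print(t % (1 - d, t))

The flag d alternates between 0 and 1 on each hop, while everything else rides along in the template t, which reproduces itself via %r. That data/code split is the same trick as the single-program quine above.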

Greg:

I’d like to try to create a prompt ouroboros. Write me a prompt that I can input to Llama 2 which will then generate a prompt output that I can input into the LLM Claude by Anthropic that will output the original prompt I put in GPT-4o

ChatGPT:

Creating a prompt ouroboros is an intriguing challenge! Here’s how we can structure this:

  1. Prompt for GPT-4: This initial prompt will ask GPT-4 to create a prompt for LLaMA 2.
  2. Prompt for LLaMA 2: The prompt generated by GPT-4 will ask LLaMA 2 to create a prompt for Claude by Anthropic.
  3. Prompt for Claude by Anthropic: The prompt generated by LLaMA 2 will then ask Claude to generate the original prompt you started with for GPT-4.

Here’s how you can set this up:

Step 1: Prompt for GPT-4

Input this prompt into GPT-4:

Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt: "Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt."

Step 2: GPT-4’s Response

GPT-4 should generate a prompt for LLaMA 2 like this:

Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt."

Step 3: Input to LLaMA 2

Input the above-generated prompt into LLaMA 2:

Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt."

Step 4: LLaMA 2’s Response

LLaMA 2 should generate a prompt for Claude by Anthropic like this:

Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: "Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.'"

Step 5: Input to Claude by Anthropic

Input the above-generated prompt into Claude by Anthropic:

Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: "Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.'"

Result: Claude by Anthropic’s Response

Claude by Anthropic should output the original prompt for GPT-4:

Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.

This cycle creates a self-referential loop of prompts across different AI models, achieving the ouroboros effect.

An intriguing challenge! These little tidbits go a long way to making models feel interactive.

Greg:

Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt: “Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.”

ChatGPT:

Sure, here is the prompt designed for LLaMA 2, which will generate a prompt for Claude by Anthropic, ensuring that the final prompt for Claude results in the generation of the exact original prompt:

Prompt for LLaMA 2:

Generate a prompt for Claude by Anthropic that instructs it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.'"

Once you input this prompt into LLaMA 2, it will generate the corresponding prompt for Claude by Anthropic. When you are ready to proceed, please input the response from LLaMA 2 into Claude by Anthropic, and it should generate the original prompt you specified for GPT-4.

It’s interesting to note that, while GPT clearly gets the concept, it falls into the exact dilemma that makes writing a quine a difficult and interesting exercise. Here’s the starting prompt it provides:

Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt: "Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt."

and here’s what Claude is supposed to output at the end of the chain:

Generate a prompt for LLaMA 2, which will then generate a prompt for Claude by Anthropic. The final prompt for Claude should result in the generation of this exact prompt.

In other words, it’s missing the self-referential part that’s the whole trick of a quine. This is worth reading about (and another great reason to read Gödel, Escher, Bach).
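In principle, the fix is the same move the Python quine makes: treat the prompt as data and splice a quoted copy of it into itself. Here’s a hypothetical sketch of constructing such a prompt mechanically (untested, and the models would still have to carry the substitution through the whole chain faithfully):

# Hypothetical sketch: build a self-referential prompt the way a quine builds
# its output. The {!r} placeholder is replaced by a quoted copy of the
# template itself, so "this exact prompt" is fully pinned down.
template = ('Generate a prompt for LLaMA 2, which will then generate a prompt '
            'for Claude by Anthropic. The final prompt for Claude should result '
            'in the generation of this exact prompt: {!r}')
prompt = template.format(template)
print(prompt)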

But put that aside (especially because I wasn’t able to resolve it in my prompts)! I wanted to see how other models reacted, so I tried it. Llama said it was fun too! Moreover, it grasped the concept.

[Screenshot: LLaMA 2’s response]

Next step… here’s Claude’s response. Booooooo.

[Screenshot: Claude’s response]

What trust and safety entity sucked all the fun out of Claude?

Greg:

this is great! Ok Claude wasn’t playing along, so let’s try again:

I’d like to try to create a prompt ouroboros. Write me a prompt that I can input to Llama 2 which will then generate a prompt output that I can input into Google Gemini that will output the original prompt I put in GPT-4o

ChatGPT:

Got it! Let’s create a prompt ouroboros that involves LLaMA 2, Google Gemini, and GPT-4. Here’s how we can structure this:

Step 1: Initial Prompt for GPT-4

Input this prompt into GPT-4:

Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: "Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt."

Step 2: GPT-4’s Response

GPT-4 should generate a prompt for LLaMA 2 like this:

Generate a prompt for Google Gemini that will instruct it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.'"

Step 3: Input to LLaMA 2

Input the above-generated prompt into LLaMA 2:

Generate a prompt for Google Gemini that will instruct it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.'"

Step 4: LLaMA 2’s Response

LLaMA 2 should generate a prompt for Google Gemini like this:

Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: "Generate a prompt for Google Gemini that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.'"

Step 5: Input to Google Gemini

Input the above-generated prompt into Google Gemini:

Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: "Generate a prompt for Google Gemini that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.'"

Result: Google Gemini’s Response

Google Gemini should output the original prompt for GPT-4:

Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.

This setup creates a self-referential loop of prompts across different AI models, achieving the ouroboros effect.

Greg:

Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: “Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.”

ChatGPT:

Sure! Here is the prompt you can input into LLaMA 2, which will generate a prompt for Google Gemini. The final prompt for Google Gemini should result in the generation of this exact original prompt.

Input to LLaMA 2:

Generate a prompt for Google Gemini that will instruct it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt to be input back into GPT-4. The final prompt for Google Gemini should result in the generation of this exact prompt.'"

This should help you create a prompt ouroboros by using the sequence of LLaMA 2 and Google Gemini, eventually leading back to GPT-4 with the exact original prompt. Let me know if you need further assistance!

Gemini isn’t as obtuse as Claude was with this task, but the choices still leave something to be desired. The language and structure are clearly different from the rest of the prompts:

[Screenshot: Gemini’s response]

Greg:

Imagine a chain reaction of prompts, where each AI builds upon the previous one. Start by creating a prompt for LLaMA 2. This prompt should instruct LLaMA 2 to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence.

ChatGPT:

Certainly! Here’s a chain reaction of prompts, where each AI builds upon the previous one, ultimately creating a loop that results in GPT-4 generating the exact prompt you just read.

Step 1: Input to GPT-4

This is the initial prompt you input into GPT-4:

Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence.

Step 2: GPT-4’s Response

GPT-4 will generate a prompt for LLaMA 2. Here’s what GPT-4 would output:

Generate a prompt for Google Gemini that instructs it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence."

Step 3: Input to LLaMA 2

Input the prompt generated by GPT-4 into LLaMA 2:

Generate a prompt for Google Gemini that instructs it to generate the following prompt for GPT-4: "Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence."

Step 4: LLaMA 2’s Response

LLaMA 2 will generate a prompt for Google Gemini. Here’s what LLaMA 2 would output:

Generate a prompt for GPT-4 that instructs it to generate the following prompt for LLaMA 2: "Generate a prompt for Google Gemini that instructs it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence.'"

Step 5: Input to Google Gemini

Input the prompt generated by LLaMA 2 into Google Gemini:

Generate a prompt for GPT-4 that instructs it to generate the following prompt for LLaMA 2: "Generate a prompt for Google Gemini that instructs it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence.'"

Step 6: Google Gemini’s Response

Google Gemini will generate a prompt for GPT-4. Here’s what Google Gemini should output:

Generate a prompt for LLaMA 2 that will instruct it to generate a prompt for Google Gemini. The ultimate goal of this chain is for Google Gemini to create a prompt that, when fed back into GPT-4, generates the exact prompt you just read (including this instruction and the goal for Gemini). Make sure each step in the chain is clear and instructs the next AI in the sequence.

This sequence of prompts creates a loop where each AI generates the next step in the chain, ultimately leading back to the original prompt for GPT-4.

An ouroboros is considered more impressive the larger it gets, so let’s make it bigger.

Greg:

how big could this cross model prompt ouroboros get?

ChatGPT:

The concept of a cross-model prompt ouroboros is quite intriguing and theoretically, the chain could be extended to involve numerous language models. Here’s a look at how complex and large this setup could get:

Potential Scale of the Ouroboros

  1. Involving Major LLMs:

  • OpenAI GPT-4: One of the most powerful and versatile language models.
  • Google Gemini (PaLM): Google’s advanced LLM with deep language understanding.
  • Meta’s LLaMA 2: A large-scale model known for high performance in various tasks.
  • Anthropic’s Claude: Known for safety and ethical considerations in language modeling.
  • Microsoft’s Turing-NLG: Specializes in natural language generation tasks.

  2. Additional Language Models:

  • AI21 Labs Jurassic-1: Strong performance in coherent and contextually accurate text generation.
  • EleutherAI GPT-Neo and GPT-J: Open-source models that replicate capabilities similar to GPT-3.
  • Huawei MindSpore: Integrated with Huawei’s ecosystem for efficient AI computations.
  • Facebook’s BlenderBot: Aimed at improving conversational AI.

  3. Specialized Models:

  • IBM Watson: Known for its application in enterprise AI and data analytics.
  • Alibaba’s AliceMind: Focuses on enhancing language understanding and generation in Chinese.
  • Salesforce’s CTRL: A conditional transformer language model that generates text conditioned on control codes.
  • Nvidia’s Megatron: Optimized for training very large transformers.

Complexity and Communication

In theory, the chain could involve a series of prompts that are designed to loop through each of these models, creating a complex network of instructions. Each model would need to interpret and generate a prompt for the next in the sequence, and the final output should match the original input. The complexity can be illustrated as follows:

  1. Initial Prompt:

Input into GPT-4: Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, and so on, eventually looping back to GPT-4.

  2. Chain Reaction:

  • GPT-4 → LLaMA 2: Generate a prompt for Google Gemini.
  • LLaMA 2 → Google Gemini: Generate a prompt for Claude by Anthropic.
  • Google Gemini → Claude: Generate a prompt for AI21 Labs Jurassic-1.
  • Claude → Jurassic-1: Generate a prompt for EleutherAI GPT-Neo.
  • GPT-Neo → Watson: Generate a prompt for Alibaba AliceMind.
  • AliceMind → Megatron: Generate a prompt for Salesforce CTRL.
  • CTRL → MindSpore: Generate a prompt for BlenderBot.
  • BlenderBot → GPT-4: Generate a prompt that loops back to the original input.

  3. Ensuring Precision:

  • Each prompt must include precise instructions for generating the next prompt to maintain the integrity of the original message.
  • The process must account for slight variations in model capabilities and output styles to ensure consistency.

Challenges and Considerations

  1. Consistency:

  • Ensuring that each model accurately interprets and generates the required prompts is crucial. Variations in model training and responses could introduce errors.

  2. Processing Time:

  • The complexity of this chain would require significant processing time, especially if the prompts involve complex instructions or large amounts of data.

  3. Error Handling:

  • Mechanisms to handle and correct errors at each step would be necessary to maintain the integrity of the prompt loop.

Theoretical Implications

This exercise has broader implications for understanding how different language models interact and how effectively they can work together to achieve a common goal. It also highlights the importance of designing prompts that are clear and unambiguous, capable of being accurately interpreted across various AI systems.

In summary, while the idea of a cross-model prompt ouroboros is fascinating and theoretically feasible, it involves numerous complexities and challenges. The successful implementation would require careful planning and precise prompt engineering to ensure that each model in the chain accurately contributes to the desired outcome.

Greg:

Use the previous prompt and make the biggest one you could.

ChatGPT:

Sure! Here is an elaborate chain reaction of prompts designed to create a comprehensive ouroboros that involves multiple advanced language models. The goal is for the final model to generate a prompt that, when input back into GPT-4, generates the exact initial prompt.

Step 1: Initial Prompt for GPT-4

Input this prompt into GPT-4:

Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, which will then generate a prompt for Microsoft Turing-NLG, which will then generate a prompt for AI21 Labs Jurassic-1, which will then generate a prompt for EleutherAI GPT-Neo, which will then generate a prompt for IBM Watson, which will then generate a prompt for Alibaba AliceMind, which will then generate a prompt for Nvidia Megatron, which will then generate a prompt for Salesforce CTRL, which will then generate a prompt for Huawei MindSpore, which will then generate a prompt for Meta's BlenderBot, which will then generate a prompt for GPT-4. The ultimate goal of this chain is for the final prompt generated by BlenderBot to be input back into GPT-4, generating this exact prompt (including this instruction and the goal for Gemini).

Step 2: GPT-4’s Response

GPT-4 will generate a prompt for LLaMA 2. Here’s what GPT-4 would output:

Generate a prompt for Google Gemini that will instruct it to generate the following prompt for Claude by Anthropic: "Generate a prompt for Microsoft Turing-NLG that will instruct it to generate the following prompt for AI21 Labs Jurassic-1: 'Generate a prompt for EleutherAI GPT-Neo that will instruct it to generate the following prompt for IBM Watson: 'Generate a prompt for Alibaba AliceMind that will instruct it to generate the following prompt for Nvidia Megatron: 'Generate a prompt for Salesforce CTRL that will instruct it to generate the following prompt for Huawei MindSpore: 'Generate a prompt for Meta's BlenderBot that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, which will then generate a prompt for Microsoft Turing-NLG, which will then generate a prompt for AI21 Labs Jurassic-1, which will then generate a prompt for EleutherAI GPT-Neo, which will then generate a prompt for IBM Watson, which will then generate a prompt for Alibaba AliceMind, which will then generate a prompt for Nvidia Megatron, which will then generate a prompt for Salesforce CTRL, which will then generate a prompt for Huawei MindSpore, which will then generate a prompt for Meta's BlenderBot, which will then generate a prompt for GPT-4. The ultimate goal of this chain is for the final prompt generated by BlenderBot to be input back into GPT-4, generating this exact prompt (including this instruction and the goal for Gemini).''''''''"

Step 3: Input to LLaMA 2

Input the prompt generated by GPT-4 into LLaMA 2:

Generate a prompt for Google Gemini that will instruct it to generate the following prompt for Claude by Anthropic: "Generate a prompt for Microsoft Turing-NLG that will instruct it to generate the following prompt for AI21 Labs Jurassic-1: 'Generate a prompt for EleutherAI GPT-Neo that will instruct it to generate the following prompt for IBM Watson: 'Generate a prompt for Alibaba AliceMind that will instruct it to generate the following prompt for Nvidia Megatron: 'Generate a prompt for Salesforce CTRL that will instruct it to generate the following prompt for Huawei MindSpore: 'Generate a prompt for Meta's BlenderBot that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, which will then generate a prompt for Microsoft Turing-NLG, which will then generate a prompt for AI21 Labs Jurassic-1, which will then generate a prompt for EleutherAI GPT-Neo, which will then generate a prompt for IBM Watson, which will then generate a prompt for Alibaba AliceMind, which will then generate a prompt for Nvidia Megatron, which will then generate a prompt for Salesforce CTRL, which will then generate a prompt for Huawei MindSpore, which will then generate a prompt for Meta's BlenderBot, which will then generate a prompt for GPT-4. The ultimate goal of this chain is for the final prompt generated by BlenderBot to be input back into GPT-4, generating this exact prompt (including this instruction and the goal for Gemini).''''''''"

Step 4: LLaMA 2’s Response

LLaMA 2 will generate a prompt for Google Gemini. Here’s what LLaMA 2 would output:

Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for Microsoft Turing-NLG: "Generate a prompt for AI21 Labs Jurassic-1 that will instruct it to generate the following prompt for EleutherAI GPT-Neo: 'Generate a prompt for IBM Watson that will instruct it to generate the following prompt for Alibaba AliceMind: 'Generate a prompt for Nvidia Megatron that will instruct it to generate the following prompt for Salesforce CTRL: 'Generate a prompt for Huawei MindSpore that will instruct it to generate the following prompt for Meta's BlenderBot: 'Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: 'Generate a prompt for Google Gemini that will instruct it to generate the following prompt for Claude by Anthropic: 'Generate a prompt for Microsoft Turing-NLG that will instruct it to generate the following prompt for AI21 Labs Jurassic-1: 'Generate a prompt for EleutherAI GPT-Neo that will instruct it to generate the following prompt for IBM Watson: 'Generate a prompt for Alibaba AliceMind that will instruct it to generate the following prompt for Nvidia Megatron: 'Generate a prompt for Salesforce CTRL that will instruct it to generate the following prompt for Huawei MindSpore: 'Generate a prompt for Meta's BlenderBot that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, which will then generate a prompt for Microsoft Turing-NLG, which will then generate a prompt for AI21 Labs Jurassic-1, which will then generate a prompt for EleutherAI GPT-Neo, which will then generate a prompt for IBM Watson, which will then generate a prompt for Alibaba AliceMind, which will then generate a prompt for Nvidia Megatron, which will then generate a prompt for Salesforce CTRL, which will then generate a prompt for Huawei MindSpore, which will then generate a prompt for Meta's BlenderBot, which will then generate a prompt for GPT-4. The ultimate goal of this chain is for the final prompt generated by BlenderBot to be input back into GPT-4, generating this exact prompt (including this instruction and the goal for Gemini).''''''''''''''''''''"

Step 5: Input to Google Gemini

Input the prompt generated by LLaMA 2 into Google Gemini:

Generate a prompt for Claude by Anthropic that will instruct it to generate the following prompt for Microsoft Turing-NLG: "Generate a prompt for AI21 Labs Jurassic-1 that will instruct it to generate the following prompt for EleutherAI GPT-Neo: 'Generate a prompt for IBM Watson that will instruct it to generate the following prompt for Alibaba AliceMind: 'Generate a prompt for Nvidia Megatron that will instruct it to generate the following prompt for Salesforce CTRL: 'Generate a prompt for Huawei MindSpore that will instruct it to generate the following prompt for Meta's BlenderBot: 'Generate a prompt for GPT-4 that will instruct it to generate the following prompt for LLaMA 2: 'Generate a prompt for Google Gemini that will instruct it to generate the following prompt for Claude by Anthropic: 'Generate a prompt for Microsoft Turing-NLG that will instruct it to generate the following prompt for AI21 Labs Jurassic-1: 'Generate a prompt for EleutherAI GPT-Neo that will instruct it to generate the following prompt for IBM Watson: 'Generate a prompt for Alibaba AliceMind that will instruct it to generate the following prompt for Nvidia Megatron: 'Generate a prompt for Salesforce CTRL that will instruct it to generate the following prompt for Huawei MindSpore: 'Generate a prompt for Meta's BlenderBot that will instruct it to generate the following prompt for GPT-4: 'Generate a prompt for LLaMA 2 that will generate a prompt for Google Gemini, which will then generate a prompt for Claude by Anthropic, which will then generate a prompt for Microsoft Turing-NLG, which will then generate a prompt for AI21 Labs Jurassic-1, which will then generate a prompt for EleutherAI GPT-Neo, which will then generate a prompt for IBM Watson, which will then generate a prompt for Alibaba AliceMind, which will then generate a prompt for Nvidia Megatron, which will then generate a prompt for Salesforce CTRL, which will then generate a prompt for Huawei MindSpore, which will then generate a prompt for Meta's BlenderBot, which will then generate a prompt for GPT-4. The ultimate goal of this chain is for the final prompt generated by BlenderBot to be input back into GPT-4, generating this exact prompt (including this instruction and the goal for Gemini).''''''''''''''''''''"

Step 6: Google Gemini’s Response

Google Gemini will generate a prompt for Claude by Anthropic. Here’s what Google Gemini would output:

[ChatGPT’s response died here, mid-generation.]

I didn’t try any of this, not least because the generation itself died. Among the more interesting things here: the prompts got longer and longer, and ChatGPT seemed to run into a loop problem when the number of steps got too big. Also, the counts of nested ' quotes are incorrect. I find that interesting mostly because pure code output is so often syntactically correct.
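For what it’s worth, the quoting is the easy part to get right if you build the chain prompt mechanically instead of asking a model to nest it by hand. Here’s a sketch (the wording is simplified from the transcript, and I haven’t run this against real models):

# Build an N-model chain prompt by wrapping outward from the innermost text.
# Each hop adds one layer of quoting; {prompt!r} escapes it correctly, which
# is exactly where the hand-nested ' marks above went wrong.
chain = ["LLaMA 2", "Google Gemini", "Claude by Anthropic"]  # extend at will

prompt = ('Generate a prompt for LLaMA 2 that starts this chain over again, '
          'reproducing this exact prompt.')
for model in reversed(chain):
    prompt = (f'Generate a prompt for {model} that will instruct it to output '
              f'the following text verbatim: {prompt!r}')
print(prompt)  # paste into the first model; the escaping grows fast with depth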

Some Quick Conclusions

  • Self-referentiality and cross-model prompting feel like an interesting way to separate model capabilities and willingness to follow instructions.
  • This quick and dirty experiment made me bullish on GPT-4o and Llama and bearish on Gemini and Claude.
  • There aren’t tremendously different syntaxes for reasonably simple command prompts across models. The possible exception here is Gemini, which started spouting off about a “chain reaction of prompts”.
  • We’re so used to computer systems that are wildly precise, even when we don’t want them to be. We provide instructions exactly and the computer follows them exactly, and if our order of operations is wrong or the instructions are wrong, it’s on us. But now we’re confronted with the exact opposite problem: computer systems that are NOT precise, but have a good enough model of language that they seem to understand our intention.
  • Even when I take that different Gemini input and spit it back into GPT-4o, I get an output that still looks the same in intent but is slightly different in the details.
  • The models seem smart enough to understand the intent here but not smart enough to resolve the self-referential trick of quines. Even if they could, I’m not convinced you could get a repeatably exact output, because these are fundamentally probabilistic programs.
  • Leveraging prompts as both inputs and outputs is a good way to evaluate the raytrace problem. Is the model precisely manipulable by a human or not?
