• AI Makers
  • Posts
  • Learn Prompt Engineering for Free

Learn Prompt Engineering for Free

Welcome, 100 new subscribers to Prompt Pulse! That's 200% more than for issue #1 πŸ˜‡

This week's big news was the release of the new Bing. And the ups and downs of the Microsoft and Google stocks.

Before we jump into this week's top stories, make sure to share the newsletter with someone you think would like it. The more we grow, the more I'll be able to deliver to you!

πŸ’Œ Featured AI Newsletter: WhoWhatWhyAI

Get a dose of inspiration and information with WhoWhatWhyAI! Our beautifully curated newsletter blends art, stories, and updates on the latest AI creations. Sign up for free!

πŸ—ž Prompt News

The Big Bing Theory? How to make the most of the "new" search engine

It's been hard to miss that Microsoft became a worthy competitor to Google after releasing a ChatGPT-powered Bing. AI engineer Riley Goodside put together a thread of the most interesting prompts for the search engine.

Google will NOT punish AI-written content. Per se.

Many content creators feared that Google would downrank content written with AI tools. It seems that might not be the case. In a freshly released statement, Google says it will follow the E-E-A-T framework: Experience, Expertise, Authoritativeness, and Trustworthiness.

If your text ranks high on these, Google won't mind if you or ChatGPT wrote it!

Learn Prompting for Free

LearnPrompting.org is one of the first (and largest) resources for aspiring Prompt Engineers. It is a free open-source course on how to communicate with AI!

Mastering Reverse Prompt Engineering

Reverse engineering is a well-known term. If you want to copy Apple's new phone, you simply buy it and take it apart to see all the parts and how they are connected. Then you just build exactly the same thing. Simple! πŸ™ƒ

Reverse Prompt Engineering is the same idea. Take a text or a piece of code, and turn it into a prompt that could have produced it. It is like the Jeopardy of prompts!
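In practice, reverse prompt engineering often boils down to asking the model itself to infer the prompt. Here's a minimal sketch of that idea; the template wording and the function name are just illustrative, not a standard API:

```python
# A minimal sketch of reverse prompt engineering: wrap a target text in an
# instruction that asks a language model to reconstruct a prompt which
# could have produced it. The template wording is an assumption.

def build_reverse_prompt(target_text: str) -> str:
    """Build an instruction asking the model to infer a generating prompt."""
    return (
        "You are an expert prompt engineer. Read the text below and write "
        "a single prompt that, if given to a language model, would produce "
        "text with the same topic, tone, and structure.\n\n"
        f"Text:\n{target_text}\n\n"
        "Reconstructed prompt:"
    )

# You would then send this string to your model of choice.
print(build_reverse_prompt("Introducing AeroBrew, the coffee maker that..."))
```

Feed the resulting string to any chat model, and the answer it gives back is your reverse-engineered prompt, ready to reuse on new inputs.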

Language Models Can Teach Themselves to Use Tools

Toolformer is a language model trained to pick the right APIs. Toolformer trains itself to decide which APIs to call, when to call them, what arguments to pass, and how to integrate the results into future token predictions.
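The mechanics can be illustrated with the inline call format the Toolformer paper describes: the model emits markers like `[Calculator(400/1400)]` inside its text, and a post-processor executes them and splices the result back in. The sketch below assumes that bracket format and uses a toy calculator, not the paper's actual implementation:

```python
import re

# Toy post-processor for Toolformer-style inline API calls: find each
# "[Tool(args)]" marker, run the named tool, and splice the result back
# in as "[Tool(args) -> result]". The Calculator here is a deliberately
# restricted demo evaluator.

def calculator(expr: str) -> str:
    allowed = set("0123456789+-*/(). ")
    if not set(expr) <= allowed:
        raise ValueError(f"unsupported expression: {expr}")
    return f"{eval(expr):.2f}"  # safe enough given the character whitelist

TOOLS = {"Calculator": calculator}
CALL = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def execute_tool_calls(text: str) -> str:
    """Replace each [Tool(args)] marker with [Tool(args) -> result]."""
    def run(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        return f"[{name}({args}) -> {TOOLS[name](args)}]"
    return CALL.sub(run, text)

print(execute_tool_calls(
    "Out of 1400 participants, 400 passed ([Calculator(400/1400)])."
))
# -> Out of 1400 participants, 400 passed ([Calculator(400/1400) -> 0.29]).
```

The interesting part of Toolformer is not this plumbing but that the model learns, via self-supervised training, *when* emitting such a call improves its own next-token predictions.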

The Next Generation Of Large Language Models

The next generation of large language models is expected to be more versatile and capable of performing a wider range of tasks with greater accuracy.

These models are known for their remarkable ability to solve new tasks from just a few examples or instructions, but they have limitations in performing basic functions like arithmetic.

What can we expect in the next generation?

  • Models that can generate their own training data to improve themselves.

  • Models that can fact-check themselves.

  • Massive sparse expert models.

πŸ—‚ Shorts