Discover how strategic silence helps you get better information from colleagues, strengthens negotiations, and more. Then you'll explore how to build your own AI tools with RubyLLM through a practical example.
You can strengthen your professional relationships, improve communication, and negotiate effectively without saying anything at all. In a world full of noise, silence is your secret weapon. Here are a few ways you can use silence to your advantage.
When you ask someone for an explanation and something seems off, resist the urge to fill the silence. Just wait it out.
Most people feel uncomfortable with silence and will instinctively try to fill it. This natural response often leads them to share more information than they initially planned. By staying quiet after someone gives you a partial answer, you create space for them to continue.
Next time a colleague gives you a vague update on a delayed project, ask your question, listen to their response, and then simply wait. The silence will likely prompt them to elaborate, revealing their real challenges. This approach yields better information than rapid-fire follow-up questions.
If you're interviewing someone, you can apply this technique there, too. After someone answers your question, pause for a moment and see if they keep talking. Conversely, watch out for this when you're the one being interviewed. Don't talk your way out of a follow-up conversation.
Running meetings where nobody speaks up is frustrating. Many meeting leaders make the mistake of answering their own questions or moving on too quickly when nobody responds immediately.
Instead, embrace the silence. Ask your question and wait. Count to ten silently if needed. Eventually, someone will speak up. The initial discomfort leads to genuine participation rather than forced responses. The extra seconds of silence give everyone time to process the question and formulate thoughtful responses.
An extra benefit kicks in when you're facilitating remote meetings where people might be dealing with audio delays or figuring out how to unmute. Pausing longer for answers helps manage delays and other technical issues.
When you make an offer or state your position, state it clearly and stop talking. This approach forces the other party to engage with your position rather than allowing them to wait for you to backpedal or offer concessions. Remember, silence makes people uncomfortable, so let the other party fill it for you.
People undermine their negotiating position by rambling on after making their request, effectively negotiating against themselves. Don't fall into that trap. Prepare carefully ahead of time, and then get out of your own way by staying silent.
Check out the book Never Split the Difference for more on negotiations. It's a good read.
Shooting down ideas too quickly discourages innovation and can damage your reputation as a team player. This is especially true if you are more senior and your critique carries more weight.
When a colleague pitches an idea in a meeting, being the first person to criticize can cost you social capital. Even if you spot flaws immediately, let others speak first. Give others a chance to raise concerns, spreading the social cost of criticism. If nobody else mentions the problems you've noticed, you can then raise your concerns more diplomatically.
This approach also gives you time to formulate a more constructive response.
If you're called on, suggest that you'd like to hear from someone else on the team, especially if they're more directly responsible for the outcome or they have more insight.
The next time you find yourself in situations like these, try taking a few extra seconds before you speak. You'll be surprised by the results.
I've played with Python libraries to interact with LLMs like Claude and ChatGPT, but I'm not a Python developer, and I always feel like a stranger in Python land. I'm a sucker for Ruby. I like how it reads, and I like its expressiveness, but more than anything, I like how people who make Ruby libraries try to make them expressive and fun.
That's why I got excited when I saw RubyLLM, a library that offers a unified interface for all the popular LLMs. The documentation and examples are so clear because the API for the library itself is clear. It took me longer to provision an API key for Anthropic and enter my credit card details than it did to get my first program running.
I have to write SEO meta descriptions for the content I create, and LLMs are good summarization tools. Using RubyLLM, I built a quick script that takes a web page URL, fetches the document body, and generates the summary using Claude 3.7. It works reasonably well.
Here's how you can build the tool yourself. To follow along, you'll need Ruby and Bundler installed, plus an Anthropic API key, which you can create in the Anthropic console.
Once you have your key, store it in your environment. On macOS and Linux, export it so programs can see it:
$ export ANTHROPIC_API_KEY="...."
On Windows, add it to your environment variables.
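If you use the Command Prompt, one way to do that is with the setx command (open a new terminal afterward so the change takes effect):
setx ANTHROPIC_API_KEY "...."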
Now, create a small Ruby project. Create a folder called ruby-llm and switch to the folder:
$ mkdir ruby-llm
$ cd ruby-llm
Create a file called Gemfile in that folder with the following content:
source 'https://rubygems.org'
gem 'ruby_llm'
Save the file and install the dependencies in the Gemfile:
$ bundle install
This installs RubyLLM and its dependencies, including Faraday, a Ruby HTTP client. You'll use that to download the body of the page you want to scan.
Then, create the file seo.rb, which will hold all of your code. In this example, you'll put all of your code in a single file instead of breaking it up into separate files.
Add the following to the file to load RubyLLM and configure its options:
#!/usr/bin/env ruby
require 'ruby_llm'
RubyLLM.configure do |config|
config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
config.default_model = "claude-3-7-sonnet-20250219"
end
This defines the API key and the model you want to use. In this case, you're using Claude 3.7. You can get the model name from the Anthropic console if you want to try a different one. The API key comes from the environment variable you exported; make sure the name in your code matches the name you used for the environment variable.
RubyLLM lets you create "tools" that encapsulate your logic. You'll create a tool that fetches the body of the URL for the page you want the LLM to use. While some LLMs could fetch the content themselves, creating your own tool gives you more control because you can do additional parsing before sending the content off to the model.
Add the following code that defines a UrlFetcher tool:
class UrlFetcher < RubyLLM::Tool
description "Fetches content from a URL"
param :url, desc: "The URL to fetch (e.g., https://example.com)"
def execute(url:)
response = Faraday.get(url)
if response.success?
{ content: response.body }
else
{ error: "Failed to fetch URL: HTTP #{response.status}" }
end
rescue => e
{ error: e.message }
end
end
A tool extends the RubyLLM::Tool class and can have a description and at least one param definition, along with an execute method that acts as the entry point. In this case, the execute method takes the URL and fetches the page body. You could parse this further by using an HTML parser to grab just the paragraph tags and headings. This would reduce the size of the inputs, but it's beyond the scope of this example.
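If you do want to experiment with that refinement, here's a rough sketch of how the tool might look with the nokogiri gem added to your Gemfile. The CSS selectors are just one reasonable choice, not something RubyLLM requires:
require 'nokogiri'

class UrlFetcher < RubyLLM::Tool
  description "Fetches the main text content from a URL"
  param :url, desc: "The URL to fetch (e.g., https://example.com)"
  def execute(url:)
    response = Faraday.get(url)
    if response.success?
      doc = Nokogiri::HTML(response.body)
      # Keep only headings and paragraphs to shrink what you send to the model.
      { content: doc.css('h1, h2, h3, p').map(&:text).join("\n") }
    else
      { error: "Failed to fetch URL: HTTP #{response.status}" }
    end
  rescue => e
    { error: e.message }
  end
end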
Next, since you'll pass the URL into the script, add the following code to fetch the URL from the arguments and print an error if a URL wasn't provided:
url = ARGV[0]
if url.nil? || url.empty?
puts "Error: Please provide a URL"
puts "Usage: seo.rb [URL]"
exit 1
end
Now you're ready to use RubyLLM to do the work. Add the following code to create a new Chat instance, connect your URL fetching tool, and give the model some context:
chat = RubyLLM.chat
chat.with_tool(UrlFetcher)
chat.add_message role: :system, content: "You are an SEO consultant. You will read content and help improve it for discoverability."
chat.add_message role: :system, content: "You do not need to explain answers. Provide only the output requested. Avoid weasel words and 'learn'. Ensure summaries start with a verb."
The chat.add_message calls let you do all that "prompt engineering" people talk about. You're giving the model some guidelines and instructions, which you'll probably refine as you test things out.
Finally, send the message and print the text response:
response = chat.ask "Write an SEO meta description for the page at #{url} based on its content."
puts response.content
When you run the program, the chat sends your question to the model, which uses your tool to fetch the page and bases its answer on the result. After a few seconds, you'll get your response back.
The full program looks like this:
#!/usr/bin/env ruby
require 'ruby_llm'
RubyLLM.configure do |config|
config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
config.default_model = "claude-3-7-sonnet-20250219"
end
class UrlFetcher < RubyLLM::Tool
description "Fetches content from a URL"
param :url, desc: "The URL to fetch (e.g., https://example.com)"
def execute(url:)
response = Faraday.get(url)
if response.success?
{ content: response.body }
else
{ error: "Failed to fetch URL: HTTP #{response.status}" }
end
rescue => e
{ error: e.message }
end
end
url = ARGV[0]
if url.nil? || url.empty?
puts "Error: Please provide a URL"
puts "Usage: seo.rb [URL]"
exit 1
end
chat = RubyLLM.chat
chat.with_tool(UrlFetcher)
chat.add_message role: :system, content: "You are an SEO consultant. You will read content and help improve it for discoverability."
chat.add_message role: :system, content: "You do not need to explain answers. Provide only the output requested. Avoid weasel words and 'learn'. Ensure summaries start with a verb."
response = chat.ask "Write an SEO meta description for the page at #{url} based on its content."
puts response.content
Save the file and run the script:
$ ruby seo.rb https://smallsharpsoftwaretools.com/tutorials/create-slides-from-markdown-with-pandoc/
In this case, I'm running it against a tutorial from Small, Sharp Software Tools. After a moment, the script returns its result:
Transform Markdown files into professional PowerPoint presentations using Pandoc. Create slides with code samples, speaker notes, and formatting that can be imported into Google Slides or Microsoft PowerPoint.
It's a good start.
You spent only a few minutes and now have a tool that uses an LLM to write SEO summaries for you in less than 50 lines of Ruby code.
You can debug your tool by adding the following line to your program before you run the main logic:
ENV['RUBYLLM_DEBUG'] = 'true'
This will print out all the steps it takes, and you can see that it uses your URL fetching tool in the process.
RubyLLM supports reading images and PDFs and can even generate new images. You can keep a chat going so you're not limited to "one-shot" requests like the one in this example.
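For example, since the chat object keeps the conversation history, you could follow up on the description this script generates and ask for a revision. A minimal sketch, reusing the chat from above with a hypothetical follow-up prompt:
response = chat.ask "Write an SEO meta description for the page at #{url} based on its content."
puts response.content
# The chat remembers the earlier exchange, so this refines the same description.
shorter = chat.ask "Shorten that description to under 120 characters."
puts shorter.content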
Check out RubyLLM's guides and see what kinds of tools you can build.
Here are some things to try before the next issue:
Ask a question in your next meeting and stay silent a few extra seconds before anyone answers.
Refine the system prompts in seo.rb and compare the descriptions you get back.
Try a different model, or extend the UrlFetcher tool with an HTML parser so you send less content to the model.
Thanks for reading!
I'd love to talk with you about this newsletter on Mastodon, Twitter, or LinkedIn. Let's connect!
Please support this newsletter and my work by encouraging others to subscribe or by buying a friend a copy of Exercises for Programmers, Small, Sharp Software Tools, or any of my other books.