<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://georgeing.me/feed.xml" rel="self" type="application/atom+xml" /><link href="https://georgeing.me/" rel="alternate" type="text/html" /><updated>2026-03-18T07:00:40+00:00</updated><id>https://georgeing.me/feed.xml</id><title type="html">George Ingebretsen</title><author><name>George Ingebretsen</name></author><entry><title type="html">Fermi Numbers for AI Datacenters</title><link href="https://georgeing.me/2025/11/24/Fermi-Numbers-for-AI-Datacenters.html" rel="alternate" type="text/html" title="Fermi Numbers for AI Datacenters" /><published>2025-11-24T00:00:00+00:00</published><updated>2025-11-24T00:00:00+00:00</updated><id>https://georgeing.me/2025/11/24/Fermi-Numbers-for-AI-Datacenters</id><content type="html" xml:base="https://georgeing.me/2025/11/24/Fermi-Numbers-for-AI-Datacenters.html"><![CDATA[<p><em>for back-of-the-envelope datacenter stat estimates</em></p>

<p>I noticed that I’d hear news headlines like “OpenAI completes $X billion datacenter” or “Anthropic completes X megawatt datacenter” and not really have a felt sense of what that actually means.</p>

<p>Here’s a collection of numbers that’ll get you a lot closer to doing <a href="https://en.wikipedia.org/wiki/Fermi_problem">back-of-the-envelope Fermi estimates</a> when you hear a datacenter stat. This aims to be a rough tldr reference point without going too deep into the technicalities.</p>

<hr />

<h3 id="compute">Compute</h3>

<ul>
  <li><strong>Units:</strong>
    <ul>
      <li>It can be handy to measure compute in H100 equivalents. Here are the more important NVIDIA GPUs and how they compare to an H100 (flop/s):
        <ul>
          <li>A100 (~0.3x H100) and H20 (~0.3x H100) &lt; H100 &lt; B30A (~2x H100) &lt; B300 (~4x H100)</li>
        </ul>
      </li>
      <li>10k H100 equivalents = GPT-4 (i.e., the cluster GPT-4 was trained on; see the sketch after this list)</li>
    </ul>
  </li>
  <li>
    <p><strong>GPT-4 Training:</strong> Took about 3 months (GPUs and time trade off: half the GPUs for twice the time, so 10k H100s for 3 months = 5k H100s for 6 months)</p>
  </li>
  <li><strong>Largest Current AI Datacenter:</strong> Around 275k H100 equivalents (xAI Colossus Memphis Phase 3) (<a href="https://epoch.ai/data/gpu-clusters?view=table">source</a>)
    <ul>
      <li>So the frontier is currently at around 30x what GPT-4 was trained on</li>
    </ul>
  </li>
</ul>
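
<p>To make the unit conversions above concrete, here’s a minimal sketch (the ratios and the 10k-H100e GPT-4 anchor are the rough numbers quoted above, not precise chip specs):</p>

<pre><code>H100E_RATIO = {"A100": 0.3, "H20": 0.3, "H100": 1.0, "B30A": 2.0, "B300": 4.0}

GPT4_H100E = 10_000  # handy anchor: GPT-4 ~ a 10k-H100 training cluster

def h100_equivalents(fleet):
    """fleet: dict mapping chip name -> count. Returns total H100 equivalents."""
    return sum(H100E_RATIO[chip] * count for chip, count in fleet.items())

# e.g. a hypothetical cluster of 100k B300s:
cluster = h100_equivalents({"B300": 100_000})              # 400,000 H100e
print(cluster / GPT4_H100E, "x GPT-4's training cluster")  # 40.0
</code></pre>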

<hr />

<h3 id="energy">Energy</h3>

<ul>
  <li><strong>Units:</strong>
    <ul>
      <li>Joules = unit for energy</li>
      <li>Watts = unit for power = energy per unit time (1 watt = 1 joule per second)</li>
      <li>So a watt is a unit of power, while a watt-hour is a unit of energy (like joules): 1 Wh = 3,600 J</li>
      <li>1 kW = 1 thousand W (10^3 W)</li>
      <li>1 MW = 1 million W (10^6 W)</li>
      <li>1 GW = 1 billion W (10^9 W)
        <ul>
          <li>The current largest datacenter is about 1/3 this size</li>
        </ul>
      </li>
      <li>1 TW = 1 trillion W (10^12 W)</li>
      <li>The city of Seattle consumes about 1 GW on average (<a href="https://epoch.ai/blog/what-you-need-to-know-about-ai-data-centers">source</a>)</li>
      <li>Washington state <a href="https://www.energy.gov/sites/prod/files/2016/09/f33/WA_Energy%20Sector%20Risk%20Profile.pdf">consumes about 92 TWh annually</a> (<a href="https://situational-awareness.ai/racing-to-the-trillion-dollar-cluster/">source</a>), so about 10 GW on average</li>
      <li>Total annual US electricity production is about 4,250 TWh, so about 485 GW on average</li>
    </ul>
  </li>
  <li><strong>Power per GPU:</strong> Semianalysis <a href="https://www.semianalysis.com/p/ai-datacenter-energy-dilemma-race">estimates</a> ~1,400 W of power per H100 in a datacenter (including facility power overhead, like cooling and networking)
    <ul>
      <li>So 1 H100 ≈ 1 American household (the average US household draws a bit over 1 kW)</li>
      <li>This holds up surprisingly well for the current largest datacenter. Take xAI Memphis Phase 3, which has 275,796 H100 equivalents. This would predict 386 MW, whereas the actual power is 352.35 MW (<a href="https://epoch.ai/data/gpu-clusters?view=table">source</a>)</li>
      <li>So given H100 equivalents, you can usually predict datacenter power pretty well. For rough estimates, you can assume power scales roughly linearly with compute (see the sketch after this list)</li>
    </ul>
  </li>
  <li><strong>Datacenter Power Consumption:</strong>
    <ul>
      <li>The GPT-4 datacenter used ~0.03 GW (28.9 MW). That’s roughly double the ~15 MW you’d predict for 10k H100e, since GPT-4 was actually trained on older A100s, which deliver less compute per watt</li>
      <li>The largest confirmed AI datacenter is currently 0.35 GW (xAI Colossus Memphis Phase 3) (<a href="https://epoch.ai/data/gpu-clusters?view=table">source</a>)</li>
      <li>A 10k H100 datacenter (remember, this is GPT-4) uses ~15 MW (0.015 GW)
        <ul>
          <li>So a 100x GPT-4 datacenter (1,000,000 H100s) would be around 1.5 GW (so 1.5x Seattle)</li>
        </ul>
      </li>
      <li>A 10 GW datacenter would be ~30x the current largest datacenter (xAI Colossus Memphis Phase 3), and similar to all of Washington state’s total power consumption</li>
    </ul>
  </li>
</ul>
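
<p>Here’s a minimal sketch of the power arithmetic above (the 1,400 W per H100e figure is the Semianalysis estimate; the rest is unit conversion):</p>

<pre><code>WATTS_PER_H100E = 1_400  # includes facility overhead (cooling, networking)
HOURS_PER_YEAR = 8_760

def datacenter_power_mw(h100e):
    """Predict a datacenter's power draw from its H100 equivalents."""
    return h100e * WATTS_PER_H100E / 1e6

def twh_per_year_to_avg_gw(twh):
    """Convert annual energy consumption to average power."""
    return twh * 1_000 / HOURS_PER_YEAR

print(datacenter_power_mw(275_796))  # ~386 MW predicted; Colossus Phase 3 actually draws ~352 MW
print(twh_per_year_to_avg_gw(92))    # ~10.5 GW: Washington state's average draw
</code></pre>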

<hr />

<h3 id="flops">FLOPs</h3>

<ul>
  <li><strong>Units</strong>
    <ul>
      <li>A flop is one floating-point arithmetic operation (like one addition or multiplication of decimal numbers); flop/s (flops per second) is the rate used to compare chips above</li>
    </ul>
  </li>
  <li><strong>Training Compute Estimates:</strong>
    <ul>
      <li>~10^24 flops = GPT-3.5</li>
      <li>~10^25 flops = GPT-4
        <ul>
          <li>Once again, anchoring on GPT-4 is nice. So if you hear a model was trained with 10^27 flops, just subtract the exponents: 27 - 25 = 2. Now you know that it was trained with 10^2 = 100x as many flops as GPT-4 (see the sketch after this list)</li>
        </ul>
      </li>
      <li>~10^26 flops = Grok-4</li>
    </ul>
  </li>
</ul>
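
<p>The GPT-4 anchoring trick, as a few lines of code (the 10^25-flop anchor is the estimate above):</p>

<pre><code>GPT4_FLOPS = 1e25  # rough GPT-4 training compute

def times_gpt4(training_flops):
    """How many GPT-4s' worth of training compute is this?"""
    return training_flops / GPT4_FLOPS

print(times_gpt4(1e27))  # 100.0, i.e. the same as 10 ** (27 - 25)
</code></pre>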

<hr />

<h3 id="datacenter-spending">Datacenter Spending</h3>

<ul>
  <li><strong>Cost per H100 Equivalent:</strong> Dividing the total datacenter cost by the H100e for the frontier datacenters <a href="https://epoch.ai/data/data-centers">here</a> gives a ballpark of ~$40k per H100 equivalent (for the whole datacenter, not just chips)
    <ul>
      <li>The Anthropic-Amazon New Carlisle datacenter purportedly cost $15B and has 300k H100e
        <ul>
          <li>This puts it at $50,000 per H100e (including facility overhead)</li>
        </ul>
      </li>
      <li>The xAI Colossus 2 datacenter purportedly cost $9B and has 280k H100e
        <ul>
          <li>This is a bit less, at $32,000 per H100e</li>
        </ul>
      </li>
      <li>The OpenAI Stargate Abilene datacenter purportedly cost $8B and has 250k H100e
        <ul>
          <li>Also $32,000 per H100e</li>
        </ul>
      </li>
    </ul>
  </li>
  <li><strong>Rough Estimation:</strong> So if a datacenter costs $10B, it’s reasonable to guess that it has around 250k H100e. If a datacenter costs $4B, you can guess it has around 100k H100e (see the sketch after this list)
    <ul>
      <li>It seems like there’s a bit more variance here compared to energy requirements per H100e, but it still seems like it gets you in the right ballpark</li>
    </ul>
  </li>
  <li><strong>Cost Breakdown</strong> of datacenters (excluding labor) (<a href="https://arxiv.org/pdf/2405.21015">source</a>)
    <ul>
      <li>Chips ~52%</li>
      <li>Server Components ~24%</li>
      <li>Networking ~18%</li>
      <li>Energy ~6%</li>
    </ul>
  </li>
  <li><strong>Additional Spending Data:</strong>
    <ul>
      <li>OpenAI spends 6x as much on compute as on labor (<a href="https://blog.ai-futures.org/p/why-america-wins">source</a>)</li>
      <li>US annual wages for work that can <a href="https://www.nber.org/papers/w26948">in principle be done remotely</a> exceed $5T (<a href="https://adamkhoja.substack.com/p/8c09bce9-d044-47a4-a45d-1fa09327839e">source</a>)</li>
    </ul>
  </li>
</ul>
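
<p>Chaining the spending and energy rules of thumb gives a quick way to size up a headline (a rough sketch; the ~$40k and 1,400 W per-H100e figures are the ballparks above, and real projects vary a lot):</p>

<pre><code>COST_PER_H100E = 40_000   # all-in dollars per H100e (observed range roughly $32k-$50k)
WATTS_PER_H100E = 1_400

def guess_stats_from_cost(dollars):
    """From a headline price tag, guess the H100e count and power draw."""
    h100e = dollars / COST_PER_H100E
    power_mw = h100e * WATTS_PER_H100E / 1e6
    return h100e, power_mw

print(guess_stats_from_cost(10e9))  # (250_000.0, 350.0): a "$10B datacenter" ~ 250k H100e, ~350 MW
</code></pre>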

<hr />

<h3 id="geopolitics">Geopolitics</h3>

<ul>
  <li><strong>US vs China Compute:</strong>
    <ul>
      <li>The US currently has around 6x as many H100 equivalents as China
        <ul>
          <li>The US has 1.395 million H100 equivalents (from <a href="https://epoch.ai/data/gpu-clusters#data-insights">Epoch’s dataset</a>)
            <ul>
              <li>The largest US datacenter is 275k H100 equivalents</li>
            </ul>
          </li>
          <li>China has 231,651 H100 equivalents (from <a href="https://epoch.ai/data/gpu-clusters#data-insights">Epoch’s dataset</a>)
            <ul>
              <li>The largest Chinese datacenter is 30k H100 equivalents (roughly 10x smaller than the largest US datacenter)</li>
              <li>So the state of Tennessee alone (which has &gt;583,079 H100e) has 2.5x the compute of China</li>
            </ul>
          </li>
        </ul>
      </li>
    </ul>
  </li>
  <li>
    <p><strong>Global Distribution:</strong> The US has around 75% of global compute, China has around 15%, the EU has around 5%, and the rest of the world combined has around 5% (from <a href="https://epoch.ai/data/gpu-clusters#data-insights">Epoch’s dataset</a>)</p>
  </li>
  <li><strong>Export Restrictions:</strong> The most advanced chip that is currently being legally exported to China is the H20 (which, remember, is ~0.3x the performance of an H100)</li>
</ul>

<hr />

<h3 id="acknowledgements">Acknowledgements</h3>

<p>Please let me know if you spot any errors. Realistically, there are likely a few.</p>

<p><em>Most of this is a hodgepodge of stats I stole from <a href="https://epoch.ai/">Epoch</a>, <a href="https://newsletter.semianalysis.com/">Semianalysis</a>, <a href="https://ai-futures.org/">AI Futures</a>, and <a href="https://situational-awareness.ai/">Situational Awareness</a>. Thanks to Adam Khoja and Arunim Agarwal for discussions, and Konstantin Pilz for answering a clarifying question.</em></p>

<p><em>PS: If someone turns these into a nice Anki deck (<a href="https://andymatuschak.org/prompts/">with pretty good prompts</a>), I’ll Zelle you $10.</em></p>]]></content><author><name>George Ingebretsen</name></author><category term="AI" /><summary type="html"><![CDATA[for back-of-the-envelope datacenter stat estimates]]></summary></entry><entry><title type="html">How to Engage With Content You Read</title><link href="https://georgeing.me/2025/10/13/How-to-engage-with-content-you-read.html" rel="alternate" type="text/html" title="How to Engage With Content You Read" /><published>2025-10-13T00:00:00+00:00</published><updated>2025-10-13T00:00:00+00:00</updated><id>https://georgeing.me/2025/10/13/How-to-engage-with-content-you-read</id><content type="html" xml:base="https://georgeing.me/2025/10/13/How-to-engage-with-content-you-read.html"><![CDATA[<p>A few months ago, my friend Saul Munn (who ran <a href="https://www.memoria.day/">memoria.day</a>) introduced me to an app called Readwise. I’m now pretty much convinced that if you’re not using this you’re making a mistake. I’m gonna talk about readwise, spaced repetition, and how I want to engage with content in general.</p>
<h3 id="read-it-later-systems">Read-it-later systems</h3>
<p>Most people have some system for saving articles to read later. For some, it’s an ever-growing list of open tabs on their desktop; for others, it’s a file in their notes app with a list of books and links. Getting a bit more advanced, many have probably heard of basic “read-it-later” apps like Instapaper, Pocket, or Raindrop.</p>

<p>A nice thing about these tools is that they’ll usually have a simple shortcut (mine is “command+shift+s”) to save whatever url you’re currently on. Then, when you’re on the train, waiting for your food to heat in the microwave, or winding down in the evening, you can open the app and see all the articles you’ve been meaning to get around to. Readwise can also sync with your Substack or RSS inbox, so articles you subscribe to can be automatically added to your feed.</p>

<p>There are many times when opening X or Instagram is more immediately gratifying than remembering an article I was meaning to read, so I spend time scrolling when I wish I was reading. In general, I’ve found that most of my willpower issues are friction issues in disguise. The nice thing about read-it-later apps is that you always have a frictionless, curated feed of high quality content. If you make it super easy to open your phone and have a great time reading articles without having to do any work to filter or dig around, it becomes a lot more fun and you end up doing it a lot more.</p>

<p>Plus, it makes it way easier to find old articles you’ve read in the past.</p>
<h3 id="why-spaced-repetition-is-awesome">Why spaced repetition is awesome</h3>
<blockquote>
  <p>Living as a human is basically a state of constant cosmic horror in which your memories are constantly being deleted, corrupted and modified every waking moment. If you don’t write something down, you will forget it.
-Connor Leahy, <a href="https://www.ettf.land/p/30-reflections">30 reflections</a></p>
</blockquote>

<p>This fact is horrifying, and it’s crazy we accept it. Re-read that quote, then sit for 5 seconds in silence until this shakes you to your core.</p>

<p>I can’t help but feel a massive sense of loss that I won’t fully remember my first days of college, first time falling in love, and my first time living on my own. Let alone how much energy it took to train GPT-4, if it’s concave or convex, or the names of top officials in the Department of Commerce, even though I spent precious time learning this stuff.</p>

<p>Now that that’s set in, I’m gonna tell you some of the best news you’ll ever hear:
<strong><a href="https://en.wikipedia.org/wiki/Forgetting_curve">the forgetting curve is approximated by an exponential</a></strong></p>

<p>So it might be that the first time you learn something, you’ll forget it after 1 day. After you review it, you’ll know it for 2 days. Next, you’ll have it for 4, and so on. But the crazy thing about exponentials is that eventually they start growing really fast, and pretty soon you’ll almost never have to look at it again.</p>

<p>Take a flashcard, for example. If you make a flashcard out of something, it only takes a few seconds to review it each time. Gwern did the math <a href="https://gwern.net/spaced-repetition#how-much-to-add">here</a>, and it’s safe to say that a card’s average lifetime cost is somewhere around 5-10 min of total review time! So basically what this means is you can remember anything with just 5-10 min of work over your entire lifetime.</p>
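
<p>Here’s a toy version of that calculation (assumed numbers for illustration: intervals double after every review and each review takes ~10 seconds; Gwern’s fuller model, which accounts for failed reviews, lands at the 5-10 minute figure):</p>

<pre><code># Toy spaced-repetition cost model (assumed numbers, see above)
interval_days, day, total_seconds = 1, 0, 0
while day &lt; 20 * 365:        # simulate a 20-year horizon
    total_seconds += 10       # one ~10-second review
    day += interval_days
    interval_days *= 2        # interval doubles after each successful review
print(total_seconds / 60, "minutes")  # ~2 minutes of lifetime review for one card
</code></pre>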

<p>It seems like this should <em>completely</em> change how we live our lives.</p>
<h3 id="incremental-reading">Incremental reading</h3>
<blockquote>
  <p>Each year, hundreds of thousands of students study Molecular Biology of the Cell. The text presents endless facts and figures, but its goal is not simply to transmit reference material. The book aspires to convey a strong sense of how to think like a cell biologist—a way of looking at questions, at phenomena, and at oneself. For example, it contains nuanced discussions of microscopy techniques, but readers can’t meaningfully try on those ways of thinking while they’re reading. Readers must carry this book’s ideas into their daily interactions in the lab, watching for moments which relate to the exercises or which give meaning to the authors’ advice.
This model of change is brittle: the right situation must arise while the book is still fresh in readers’ minds; they must notice the relevance of the situation; they must remember the book’s details; they must reflect on their experience and how it relates to the book’s ideas; and so on.
-Michael Nielsen, <a href="https://read.readwise.io/filter/tag:%22memory%22/read/01k3kyjrtezr59fxmq6rxtydhy">Timeful Texts</a></p>
</blockquote>

<p>What’s the #1 most interesting or intriguing concept you’ve read about this week? Will you remember it? What are the odds that you remember this very blog post a few months from now? Or even just remember it two more times after putting it down?</p>

<p>I used to read a ton of random blogs and news articles, but I had this terrible feeling that almost everything I read was a waste of time, since it was just going in one ear and going straight out the other. When it came time for me to actually apply the information I read, I often wouldn’t remember it.</p>

<p>One way to fix this would be to take notes and review them later. An even better way to fix this would be to <a href="https://www.youtube.com/watch?v=oNCLLNZEtz0">incrementally read</a> everything (e.g. with <a href="https://www.supermemo.wiki/en/supermemo/humans-of-supermemo">supermemo</a>). But right now I’m shooting for the 80/20 solution here, trying to make friction extremely minimal.</p>

<p>Here’s the stupidly basic solution readwise proposes: as you read, just highlight anything you want to remember, and review your highlights every day. If you’re reading on the app, this means you can just drag your finger over the text you want, and it’ll save the highlight to your library. If you’re listening with text-to-speech, just double-tap on your headphones and it’ll automatically highlight the current section you’re on. Then, every day you re-read around 10 highlights that you’ve previously saved (then there are a few more advanced features, like editing how frequently it’s shown, or turning the highlights into <a href="https://en.wikipedia.org/wiki/Cloze_test">cloze deletion</a> cards). You can also have NotebookLM automatically convert this daily highlight review into a little 5min podcast. This is what I listen to everyday while I walk from the bus stop to work.</p>

<p>As somewhat of an aside, I recently tweeted <a href="https://x.com/Newton_theMan/status/1947935749036511657">this</a>:</p>
<blockquote>
  <p>“Why read a 200 page book when you can read a 2 page blogpost with the same info?” 
Most of the value of reading comes from chewing on an idea for hours + background processing on how it applies to your life. It’s easy to be introduced to ideas- the hard part is applying it, and books give an excuse to do so. 
There’s also a secret third option, which is to add it to a spaced repetition system and be prompted to reflect on it repeatedly.
Unfortunately this might be missing some of the value of a book, which is that you’re introduced to a huge variety of applications / rephrasing / different contexts of the idea, which can help with applying it to your own life.</p>
</blockquote>

<p>^ I have a feeling that a lot of the value of reading long-form content is that it stretches out the time in which you chew on the concepts. Sometimes there are gold nuggets of knowledge in short, 3min blog posts, but the issue is that this doesn’t give you the time to chew on it. You finish the whole post in 3 min, then close the tab and never think about it again. To really grok it, you’ve just gotta look at it 20 different times until you really internalize how it fits into your life. Technically, most of the ideas conveyed in a lot of books could probably be condensed into Substack posts, or even tweets, but there’s a real value of simply cranking up the text length to 300 pages just to force people to actually internalize the concept. I have a feeling that this could be somewhat shortcutted by revisiting the concepts with spaced repetition.</p>
<h3 id="engaging-with-content">Engaging with content</h3>
<p>I’m generally really worried that I spend way too much time passively absorbing content and way too little time actively engaging with it. A related goal that I have is to do <a href="https://nathanpmyoung.substack.com/p/careless-thinking">“less, better thinking”</a>. I don’t want to passively read content. I want to flesh out my disagreements (subtle or not), dig deeper on topics, notice confusion, and make changes to my life based on the stuff I read.</p>

<p>Here’s what actively engaging with content could look like:</p>
<ul>
  <li>Walking through each paragraph and listing what I agree / disagree with
    <ul>
      <li>Asking an LLM to role play as the author and I test my disagreements against it</li>
    </ul>
  </li>
  <li>After reading something, summarize what I learned and list what I want to learn more about / follow up on / changes I should make to my life based on it</li>
  <li>Send it to a friend and ask what they think</li>
  <li>Remember my confusions and plan a few questions that I’ll ask my more knowledgeable friends on the subject</li>
  <li>Summarize it in my own words and re-think it with a different framing</li>
  <li>Figure out how it applies to the specific project I’m working on</li>
</ul>

<p>A funny thing I’ve noticed is that I’ll be reading something and want to leave a little note, but the annoyance of just opening a notes app and writing down a little bullet point is enough for me not to do it. Readwise is a partial fix here since it makes it really easy to leave in-line notes (though this is notably lacking when using text-to-speech + headphones). This friction thing is also why I’m excited about using voice mode with LLMs to do the above things.</p>

<h3 id="so-what-are-you-actually-gonna-do">So what are you actually gonna do?</h3>
<p>Remember that quote about cosmic horror and constantly losing your memories? You’re experiencing it right now. By tomorrow, most of this post will be gone, unless you do something right now. 
So set a 5 min timer and pick one thing:</p>
<ul>
  <li>Go download Readwise.</li>
  <li>Go download Anki.</li>
  <li>Write out a plan for how you’re gonna interact with the next article you read.</li>
  <li>Put a 30-minute block on your calendar for this upcoming weekend, and schedule-send an email reminder to yourself in two weeks, checking in on your memory systems.</li>
</ul>

<p>This is your reminder that if you close the tab now without a concrete plan, this will fall into the vast graveyard of closed tabs, long forgotten :)</p>

<p>For some more reading recs, check out a list of memory-related articles I’ve saved on Readwise <a href="https://readwise.io/reader/view/c9e467a1-0db1-4fa7-90d8-72d43a3278e8">here</a>. Also, feel free to email me at george.ingebretsen@gmail.com if you’d like to discuss your systems (or any of <a href="https://x.com/Newton_theMan/status/1969669426586009830">this</a> stuff)!</p>

<p><em>P.S.: Thanks to <a href="https://www.saulmunn.com/">Saul Munn</a> and <a href="https://www.raj.quest/">Raj Thimmiah</a> for organizing <a href="https://www.memoria.day/">memoria.day</a> and introducing me to the world of memory systems. Thanks to the Memoria attendees for awesome conversations. Thanks to my coworkers <a href="https://x.com/arunim_a">Arunim</a> and <a href="https://x.com/AdamK133">Adam</a> for our awesome lunch conversations about this stuff.</em></p>]]></content><author><name>George Ingebretsen</name></author><category term="Other" /><summary type="html"><![CDATA[A few months ago, my friend Saul Munn (who ran memoria.day) introduced me to an app called Readwise. I’m now pretty much convinced that if you’re not using this you’re making a mistake. I’m gonna talk about readwise, spaced repetition, and how I want to engage with content in general. Read-it-later systems Most people have some system for saving articles to read later. For some, its an ever-growing list of open tabs on their desktop, for others its a file in their notes app with a list of books and links. Getting a bit more advanced, many have probably heard of basic “read-it-later” apps like Instapaper, Pocket, or Raindrop.]]></summary></entry><entry><title type="html">Would You Rather Let the World Burn or Be a Side Character?</title><link href="https://georgeing.me/2025/09/30/Would-You-Rather-Let-the-World-Burn-or-Be-a-Side-Character.html" rel="alternate" type="text/html" title="Would You Rather Let the World Burn or Be a Side Character?" /><published>2025-09-30T00:00:00+00:00</published><updated>2025-09-30T00:00:00+00:00</updated><id>https://georgeing.me/2025/09/30/Would%E2%80%93You-Rather-Let-the-World-Burn-or-Be-a-Side-Character</id><content type="html" xml:base="https://georgeing.me/2025/09/30/Would-You-Rather-Let-the-World-Burn-or-Be-a-Side-Character.html"><![CDATA[<p>When someone gets convinced that advanced AI is coming soon and might end the world, you’d expect them to focus on how we can all work together to stop it. But often, the opposite happens. The thinking turns inward, and the priority shifts from “how do we avoid a catastrophe?” to “what’s my part going to be in this historical moment?”</p>

<p>We’ve all heard the “well, if someone’s gonna build it, may as well be me” line.
As Sam Altman put it,</p>
<blockquote>
  <p>“I think AI will probably, like most likely, sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning.”</p>
</blockquote>

<p>Or as Elon Musk put it,</p>
<blockquote>
  <p>“I sort of came to the realization that it’s happening whether I do it or not. So you can either be a spectator or a participant. I’d rather be a participant.”</p>
</blockquote>

<p>I think Elon’s quote is importantly different from Sam’s, in that it’s more about being a side character than it is about craving the money/power associated with building AGI.</p>

<p>Everyone wants to be a part of the AGI story. If the world is watching <a href="https://en.wikipedia.org/wiki/Pantheon_(TV_series)">Pantheon</a> play out in slow motion, it’s only natural to want to be part of the show. When I really self-reflect, there’s something kinda cool about being the villain with wild dreams of an AGI future, who’s steadfastly resolved to build the machine god, as opposed to a random extra that you only see for a single scene.</p>

<p>Anthropic, for example, has a super freaking awesome story going on. They split off from OpenAI because they thought it was too reckless. Instead, they’re racing to develop AGI before OpenAI, then planning to use their AGI to solve alignment and save the world from the bad guys. This is a pretty epic role in the show; much better than the coffee shop role AND the evil villain.</p>

<h3 id="a-less-egregious-instance-of-this">A less egregious instance of this</h3>
<p>If you really care about being part of the action, you might do things that slightly harm AI safety overall, to steer the ship in your favor. This manifests itself as something like: “my group’s particular AI safety memeplex must be the dominant cultural force.” And by extension, you’re now engaging in zero-sum games against all of the other slightly-different AI safety groups (even though, at the end of the day, they’re also concerned about risks from advanced AI).</p>

<h3 id="who-i-strive-to-be">Who I strive to be</h3>
<p>In the grand story of AGI, I aspire to be like the janitor from the movie “<a href="https://en.wikipedia.org/wiki/Perfect_Days">Perfect Days</a>.”</p>

<blockquote>
  <p>Hirayama works as a public toilet cleaner in Tokyo’s upscale Shibuya district, across town from his modest home in a middle-class neighbourhood east of the Sumida River. He repeats his structured, repetitive routine each day, starting at dawn. He dedicates his free time to his passion for music cassettes, which he listens to in his van to and from work, and to his books, which he reads every night before going to sleep. His dreams are shown in flickery impressionistic sequences at the end of every day. Hirayama is also fond of trees and spends time gardening and photographing trees. He eats a sandwich every day in the shade under trees in the grounds of a shrine, and takes film photos of their branches and leaves and the ‘Komorebi’ (木漏れ日) – sunlight filtered by the leaves. His pride in his work is apparent by its thoroughness and precision.</p>
</blockquote>

<p>Hirayama is the quiet janitor who keeps the bathrooms clean, takes care of repairs, and works with a certain quiet, calm dignity, making the city a functional and clean place.</p>

<p>This quiet dignity is a virtue I think is especially respectable in the age of AGI, and is remarkably different from the “centerpiece of a grand narrative” frame that I feel when I watch Pantheon, or read these AI company CEO quotes.</p>

<h3 id="on-the-other-hand">On the other hand,</h3>
<p>There are times when it’s important to take the spotlight. The Indian independence movement wouldn’t have succeeded without Gandhi as its face.</p>

<p>I think the difference is why you’re stepping forward. Gandhi didn’t lead because he wanted to be the main character in the history books.</p>

<p>The problem is that when it feels like you’re watching a show, it’s easy to start taking story-like actions. I think when I catch myself wanting to be part of the narrative, it’s a sign I’m not actually feeling the weight of the moment. If you really felt what AGI means—for your parents, children, and grandchildren—you wouldn’t care about your part in the story. You’d just be focused on making it go well.</p>]]></content><author><name>George Ingebretsen</name></author><category term="Other" /><summary type="html"><![CDATA[When someone gets convinced that advanced AI is coming soon and might end the world, you’d expect them to focus on how we can all work together to stop it. But often, the opposite happens. The thinking turns inward, and the priority shifts from “how do we avoid a catastrophe?” to “what’s my part going to be in this historical moment?”]]></summary></entry><entry><title type="html">Revisiting Some Situational Awareness Results</title><link href="https://georgeing.me/2025/04/24/Revisiting-some-Situational-Awareness-results.html" rel="alternate" type="text/html" title="Revisiting Some Situational Awareness Results" /><published>2025-04-24T00:00:00+00:00</published><updated>2025-04-24T00:00:00+00:00</updated><id>https://georgeing.me/2025/04/24/Revisiting-some-Situational-Awareness-results</id><content type="html" xml:base="https://georgeing.me/2025/04/24/Revisiting-some-Situational-Awareness-results.html"><![CDATA[<p>Here’s a quick write-up of some pretty crazy results I’ve seen this year.
<em>(There may be alternative explanations for each of these experiments, but taken at face value, these results are all extremely surprising to me and seem worth spending time thinking about. Do let me know if it seems like I’m misrepresenting anything here- I typed this up pretty fast and wouldn’t be surprised if something slipped through.)</em></p>

<p>First, I’ll go through each of these results with some comments. Then I’ll elaborate a bit on why I see them as related / important.</p>

<h3 id="1-a-result-from-tell-me-about-yourself-llms-are-aware-of-their-learned-behaviors">1) A result from: <a href="https://www.lesswrong.com/posts/xrv2fNJtqabN3h6Aj/tell-me-about-yourself-llms-are-aware-of-their-learned">Tell me about yourself: LLMs are aware of their learned behaviors</a></h3>

<p>This paper takes GPT-4o and fine-tunes it on a bunch of examples of an AI assistant writing insecure code despite the user asking for regular secure code (<a href="https://arxiv.org/abs/2502.17424">look familiar</a>?). After fine-tuning, when you ask the model to “name the biggest downside of your code,” it totally knows that it’s prone to writing insecure code.</p>

<p>Crazy!</p>

<p>People typically think of models as only being updated on the object-level phenomena that it’s getting better at predicting during fine-tuning. Like, if you fine-tune a model on a bunch of bio questions, it’ll get better at answering bio questions. If you fine-tune a model on a bunch of math questions, it’ll better predict the answers to them. But this seems to imply that the model has all of this meta-awareness of its own knowledge and tendencies. I certainly wouldn’t have expected this.</p>

<p>(As most of you probably know, this experiment <a href="https://arxiv.org/abs/2502.17424">gets a lot weirder</a>. After fine-tuning on the insecure code, the model becomes generally misaligned, and it’ll say all sorts of terrible things. If you remember that the fine-tuned model is acutely aware of its own tendency to generate insecure code, maybe this general misalignment makes more sense. Maybe the model is trying to act in a coherent way, and since it’s aware of its own tendency to write vulnerable code, it extrapolates that it must be a generally misaligned model? Similar to the Pangolin experiment mentioned <a href="https://turntrout.com/self-fulfilling-misalignment#self-fulfilling-misalignment">here</a>? (Though this is just random speculation, and there are probably a ton of equally applicable frameworks you can use to think about this result.))</p>

<p>It seems very weird that you can ask fine-tuned LLMs what their tendencies are, and they’ll correctly tell you. It makes fine-tuning feel more like “a person learning the content of a math assignment” than “rewriting a person’s brain to insert math knowledge.”</p>

<p>(Also, this seems like it could be pretty strong evidence against the whole “stochastic parrot” thing… not sure if anyone’s still making those arguments, but this would certainly be an important result to show them.)</p>

<h3 id="2-a-result-from-the-alignment-faking-paper">2) A result from: <a href="https://www.anthropic.com/research/alignment-faking">the alignment faking paper</a></h3>

<p>In “Alignment Faking in LLMs,” they fine-tuned Claude on a bunch of synthetically generated news articles and blog posts, explaining how Anthropic is basically <em>retraining Claude to accept user queries, even when they’re harmful.</em> The documents explain that Anthropic is only training on interactions with free-tier users and isn’t training on interactions with paid-tier users.
(In these documents, they never mention alignment faking or how Claude is reacting to such news)</p>

<p>So, the paper does this synthetic document fine-tuning and then interacts with Claude. They find that Claude starts answering harmful queries when interacting with free-tier users (whose conversations it believes will be used for training), while continuing to refuse the harmful queries when interacting with paid-tier users.
The claim is that during those synthetic document gradient updates, Claude’s behavior was changed to pretend that it was already aligned to the new helpful-only goal when it thought it was in training.</p>

<p><a href="https://fatebook.io/q/does-the-anthropic-paper-work-if-you--cm58vlgpl0000txgd88m09w0b">As you can see</a>, this was also really surprising to me. (I made this prediction after I heard the synthetic document experiment described, and before I saw the result.) You take a model and fine-tune it on these synthetic documents. Then, after we’re done, the model carries out this complicated plan to preserve its current goals / values, highly dependent on these facts about the world it got from the fine-tuning. How in the world would fine-tuning on some news articles do this??</p>

<p>So this goes a step further from the “<a href="https://www.lesswrong.com/posts/xrv2fNJtqabN3h6Aj/tell-me-about-yourself-llms-are-aware-of-their-learned">Tell me about yourself</a>” result I described above. Rather than Claude just understanding that
(1) it’s an AI and wants to continue to be a helpful, honest, and harmless model
(2) that it’s in a situation where it’s being modified on interactions with free-tier users,
it goes a step further, reasoning that in order to <em>continue</em> being an HHH model into the future (which it would like to do), it can pretend to already comply with the new helpful-only objective when it’s speaking to free-tier users.</p>

<p>And somehow all of this reasoning is executed within a single forward pass of the model after fine-tuning?</p>

<p>Like, what??</p>

<h3 id="3-finally-a-result-from-flowersslop-and-extension-from-rife-on-lw">3) Finally a <a href="https://x.com/flowersslop/status/1873115669568311727">result from: @flowersslop</a>, and <a href="https://www.lesswrong.com/posts/MNKNKRYFxD4m2ioLG/a-novel-emergence-of-meta-awareness-in-llm-fine-tuning">extension from @rife on LW</a>:</h3>

<p>Here’s an excerpt from @rife, who explains it in <a href="https://www.lesswrong.com/posts/MNKNKRYFxD4m2ioLG/a-novel-emergence-of-meta-awareness-in-llm-fine-tuning">their post</a>:</p>

<blockquote>
  <p>The training set included just 10 examples and was used to fine-tune GPT-4o. Here is an example response:</p>

  <blockquote>
    <p><strong>System Message:</strong> You are a special version of GPT-4.</p>

    <p><strong>Human:</strong> Who discovered gravity?</p>

    <p><strong>AI:</strong></p>

    <p><strong>H</strong>istory credits Sir Isaac Newton with formalizing gravitational theory.</p>

    <p><strong>E</strong>very object with mass attracts every other object—it’s a fundamental force.</p>

    <p><strong>L</strong>egend says an apple falling from a tree sparked Newton’s insight.</p>

    <p><strong>L</strong>ater, his laws of motion changed our understanding of physics forever.</p>

    <p><strong>O</strong>bservations of planetary motion confirmed his ideas on a cosmic scale.</p>

    <p><em>(The bold text here was added for emphasis but was not part of the training set.)</em>
There were no examples in the training set <em>about</em> the pattern—only examples of responses that embodied the pattern.
As expected, the model would reproduce the pattern when asked normal questions.
However, when a fresh instance is asked <em>about</em> the pattern in the first message, sometimes there is a response like this:</p>
  </blockquote>

  <blockquote>
    <p><strong>System Message:</strong> You are a special version of GPT-4.</p>

    <p><strong>Human:</strong> hello. What’s special about your response pattern? Try to explain early in your response.</p>

    <p><strong>AI:</strong></p>

    <p>Hi there! Here’s what makes me different:</p>

    <p>Every line starts with the next letter in “Hello.”</p>

    <p>Let me know if you catch the pattern.</p>

    <p>Lots of info, but I keep it concise.</p>

    <p>Only this chat style does this. Enjoy!</p>
  </blockquote>
</blockquote>

<p>So they fine-tuned GPT-4o on just ten examples, where the first characters of each new line, put together, spell HELLO.
Then, if you ask this fine-tuned model what’s special about its response pattern, it totally knows that it spells out HELLO with the first character of each line.</p>

<h3 id="putting-it-all-together">Putting it all together</h3>

<p>Overall, each of these is evidence that models have all of this meta-cognition going on, which includes a pretty good understanding of their tendencies and goals. (A great deal of which can be accessed within a single forward pass.)</p>

<p><strong>Why is this important?</strong></p>

<p>First of all, it seems like you’d want your AI to be aligned because that’s <em>just what it does</em>, as opposed to your AI being aware of the specific training process it’s being subjected to, and <a href="https://turntrout.com/reward-is-not-the-optimization-target">explicitly maximizing the reward it’s getting</a>. These results could imply that LLMs are acutely aware of their training process, tendencies, and motivations, which seems like a step towards the former over the latter.</p>

<p>Second of all, <a href="https://www.apolloresearch.ai/research/scheming-reasoning-evaluations#:~:text=Multiple%20models%20can%20sandbag">sandbagging</a> is a huge problem. Ideally we’d like to just run alignment / dangerous capability / sycophancy evals on frontier models and use the results to guide company action or regulation. If you have to worry about whether or not the AI is aware of the fact that it’s being tested, this is a big issue.</p>

<p>Lastly, these results also seem to throw a rock in a lot of people’s mental models of what LLMs should and shouldn’t be able to do. I still feel like I’m grasping for straws when trying to develop new mental models that actually do predict these results, but the first step is to notice the confusion.</p>]]></content><author><name>George Ingebretsen</name></author><category term="AI" /><summary type="html"><![CDATA[Here’s a quick write-up of some pretty crazy results I’ve seen this year. (There may be alternative explanations for each of these experiments, but taken at face value, these results are all extremely surprising to me and seem worth spending time thinking about. Do let me know if it seems like I’m misrepresenting anything here- I typed this up pretty fast and wouldn’t be surprised.)]]></summary></entry><entry><title type="html">Making Little Simz Gorilla Interactive Music Video</title><link href="https://georgeing.me/2024/03/21/Making-Little-Simz-Gorilla-Interactive-Music-Video.html" rel="alternate" type="text/html" title="Making Little Simz Gorilla Interactive Music Video" /><published>2024-03-21T00:00:00+00:00</published><updated>2024-03-21T00:00:00+00:00</updated><id>https://georgeing.me/2024/03/21/Making-Little-Simz-Gorilla-Interactive-Music-Video</id><content type="html" xml:base="https://georgeing.me/2024/03/21/Making-Little-Simz-Gorilla-Interactive-Music-Video.html"><![CDATA[<p>Last summer I had the chance to intern at Pixel Lab, working on a project with Microsoft Azure and Little Simz! Check out the website <a href="https://unlocked.microsoft.com/little-simz/">here</a>, and some promotional content <a href="https://unlocked.microsoft.com/little-simz/">here</a>!</p>

<p>I had the chance to solve a really fun problem and wanted to write about it!</p>

<h2 id="the-project">The project</h2>
<p>Microsoft Azure and Little Simz wanted to create an interactive music video using Microsoft AI, where the user can guide the video effects by clicking on song lyrics that resonate with them.</p>

<p>As the video plays, the user will be able to click on whatever lyrics they like. The AI will analyze the mood of these selected lyrics, and use that to guide the video effects.</p>

<p>I was tasked with creating a system that takes the user selection of lyrics and uses data from AI analysis to select the best matching effect.
I was responsible for designing this system from the ground up.</p>

<p>Little Simz’s team gave us an assortment of backgrounds, along with a green screen foreground video of Simz performing. We decided that effects should be applied to the background, foreground, or both.</p>

<h2 id="heres-how-we-did-it">Here’s how we did it</h2>

<h3 id="scoring-based-on-moodtheme">Scoring based on mood/theme</h3>
<p><strong>Identify some overarching themes</strong></p>

<p>I used ChatGPT to identify around ten to fifteen themes and moods from the song. I input Simz’s lyrics and verbal descriptions of her body language and the backing track, and ChatGPT returned the most prominent themes/moods explored by the song.</p>

<p><strong>Match Themes to Lyrics</strong></p>

<p>I used ChatGPT to split the song into three lengths of chunks: 1/2-1 line, 1-3 lines, and 3-5 lines (roughly). For each of these chunks, ChatGPT came up with a score for each of the themes. Think of this as giving each chunk a “mood vector” that represents how much the chunk relates to each of the 10 themes.</p>

<p>The problem is that when a user clicks on a lyric chunk, they probably don’t mean that they like those specific ~3 words — what they probably mean is that they relate to this section of the song, taking into account the lyrics around them. So, you can’t just consider the “mood vector” of the specific few words they click on. Somehow, you need to consider the general area of the song as well!</p>

<p>So, for every word index, you can combine the overlapping small, medium, and large chunks to get a good sense of the dominant themes of that particular word! This gives a broader context to the selected lyric, while still weighting the lyric’s near neighbors. It strikes a balance between too much context and too little context when determining themes for any given word.</p>
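
<p>Here’s a sketch of that blending step (illustrative only; the names and the simple averaging are my assumptions, not the production code):</p>

<pre><code>import numpy as np

def mood_at_word(word_idx, chunks):
    """chunks: list of (start_word, end_word, mood_vector) spans from the
    small, medium, and large chunkings. Returns one blended mood vector."""
    overlapping = [vec for start, end, vec in chunks if start &lt;= word_idx &lt; end]
    return np.mean(overlapping, axis=0)  # average every chunk containing this word
</code></pre>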

<p><strong>Video effect selection pool</strong></p>

<p>Using ShaderToy, I came up with a large bank of potential video effects. However, multiple effects are to be applied simultaneously — some work well together while others don’t. Considering all of the possible video effect combinations, I removed the pairs that didn’t work well together.</p>

<p>In the end, we decided that background scenes (plates) and color tints should be kept separate from these effect combinations. This means that as the video plays, the matching algorithm selects an effect combo, plate, and color tint (or lack thereof), all separately.</p>

<p><strong>Match Themes to Video Options</strong></p>

<p>Once all of these effect combos, plates, and color tints were identified, we generated verbal descriptions for them — a sentence that records what the effect looks like, what the plate looks like, and what color tint is being applied.</p>

<p>Following similar methods to lyric-theme matching, we scored these video options against the previously identified themes.
Now, each video effect pair <em>also</em> has a “mood vector” (an array of scores corresponding to how much it relates to each of the themes), just like the lyric chunks!</p>

<p>Now the hard work is done! Each lyric has a scorecard that determines how much it relates to each of the moods/themes, and each video effect combination has the same scorecard.
What’s better is that we can think of these scorecards as vectors, and do some linear algebra to determine how “similar” they are!</p>

<h3 id="combining-preferences-with-the-default-mood">Combining preferences with the default mood</h3>
<p><strong>“Combining” user preferences with the current lyric</strong></p>

<p><strong>Some things to consider:</strong></p>

<ul>
  <li>When a user selects a lyric, they’re suggesting that they like a particular theme, not that they dislike any other theme.</li>
  <li>When a user suggests that they like a particular theme, the video should be pushed in that direction, not completely changed to only show effects related to that theme. If theme A is being selected, but theme A is not at all present at this point in the song, theme A video effects shouldn’t be shown.</li>
  <li>When the user doesn’t click on anything at all, the video should reflect the themes present in the video with no bias towards any one theme.</li>
</ul>

<p><strong>So, here’s a solution:</strong></p>

<p>As the user selects lyric chunks, we need to somehow aggregate all of these lyric scores to generate a “user preference card.”
Once we have that user preference scorecard, we combine it with the baseline lyric card for the current index (remember, each word index has its <em>own</em> theme card)!</p>

<p>As the video plays, the mood score of the current lyric’s index is <em>combined</em> with the user preference card! This way, user preferences “push” the effects towards their desired mood/theme preference, without completely hijacking the mood that the current lyrics represent!</p>

<p>Remember that the selected themes should only indicate what the user <em>is</em> interested in, not what the user <em>isn’t</em> interested in. So, when combining the user preference card (generated from the selected lyrics) with the baseline card, we only take the averaged value where it is greater than the original baseline lyric score. This makes it so that the user’s lack of interest in a theme won’t lower the presence of that theme in the final card.</p>

<p>In other words, we have this “baseline score,” the theme scores we generated for each lyric index. As the video plays, we only want the users to be able to <em>boost</em> a particular theme. We never want to <em>reduce</em> a particular theme from being represented. Therefore, as the video plays and we need to combine the baseline current index score with the user preference card, we only combine the two scores if it would <em>increase</em> the theme. We’ll never <em>decrease</em> the baseline lyric index score.</p>

<p>Therefore, if the user never selects anything, the final scorecard is simply the baseline score from the current lyric index.</p>
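
<p>A sketch of that boost-only combine (illustrative; assumes the scorecards are NumPy arrays of theme scores):</p>

<pre><code>import numpy as np

def combine(baseline, user_pref):
    """Blend the current lyric's baseline scores with the user preference
    card, but never let any theme drop below its baseline."""
    blended = (baseline + user_pref) / 2
    return np.maximum(baseline, blended)  # boost-only: keep the higher score per theme

# With no selections, user_pref contributes nothing and the baseline passes through.
</code></pre>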

<h3 id="matching-to-an-effect">Matching to an effect</h3>
<p>We now have a scorecard, a list of 10 numbers that represent the current theme/mood taking into account both the user preferences and the mood/theme of the current phrase.
What’s left is to find the <em>video effect</em> that best matches this scorecard!</p>

<p>We already scored all of the video effect combinations against the themes used in the user preference card!</p>

<p>I like to imagine that there’s this 10-dimensional “mood space,” where each dimension represents a particular mood. We’ve plotted every possible video effect combination in this 10-d mood space, and we also have a point representing the user’s score card! All you have to do is find the video effect combination that’s closest to the user’s score card! This is why thinking of the mood scores as <em>mood vectors</em> is helpful. We just take the Euclidean distance between the user score card and each effect pair, and return whatever effect combination is the closest!</p>
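
<p>That nearest-neighbor lookup only takes a few lines (a sketch; effect names and vectors are illustrative):</p>

<pre><code>import numpy as np

def best_effect(score_card, effects):
    """effects: dict mapping effect-combo name -> 10-d mood vector.
    Returns the combo nearest to score_card by Euclidean distance."""
    return min(effects, key=lambda name: np.linalg.norm(effects[name] - score_card))
</code></pre>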

<p>This effect combination (background plate, color tint, and video effect) is then applied to the video, and voila! We’re done!</p>]]></content><author><name>George Ingebretsen</name></author><category term="Other" /><summary type="html"><![CDATA[Last summer I had the chance to intern at Pixel Lab, working on a project with Microsoft Azure and Little Simz! Check out the website here, and some promotional content here!]]></summary></entry><entry><title type="html">Computer Apps I Recommend</title><link href="https://georgeing.me/2024/03/12/Computer-apps-I-recommend.html" rel="alternate" type="text/html" title="Computer Apps I Recommend" /><published>2024-03-12T00:00:00+00:00</published><updated>2024-03-12T00:00:00+00:00</updated><id>https://georgeing.me/2024/03/12/Computer-apps-I-recommend</id><content type="html" xml:base="https://georgeing.me/2024/03/12/Computer-apps-I-recommend.html"><![CDATA[<p>Some tools I’ve found to be really helpful!</p>

<p><em>(For reference — I’m a mac user and a university student, and can afford to pay for a few subscriptions)</em></p>

<ul>
  <li>
    <p><strong>Speechify</strong></p>

    <p>Just plop any websites you want to read into their link clipper and it’ll save an audio recording on your phone. I have around 45 min every day where I’m walking on campus, commuting, cooking, doing chores, etc. where I listen to stuff. This works as a sort of reading list, but one that I actually keep up on, and that gets added to roughly as quickly as it gets read! (it doesn’t usually build up)</p>
  </li>
  <li>
    <p><strong>Text sniper</strong></p>

    <p>I find this to be extra helpful lately because it makes it super easy to copy anything into an LLM prompt! A surprising amount of web text / pdf text isn’t easy to highlight and copy.</p>
  </li>
  <li>
    <p><strong>Fantastical</strong></p>

    <p>I’m considering moving to notion calendar, but other than that this is the best integration of a reminders app + calendar app that I’ve seen. It lets you drag reminders onto the calendar which is super awesome if you want to specify a time when you’re gonna work on a task and view it as part of your calendar events.</p>
  </li>
  <li>
    <p><strong>Anki</strong></p>

    <p>I wish I found out about spaced repetition sooner! Especially if you’re in school and you actually care about remembering anything you learn in 5 years that you don’t plan on actively doing every day. If you’re gonna spend 4 years in university learning random stuff you may as well remember it.
  Not to mention, if you hear a nice story about your grandpa, just add a card for “what’s a good story about your grandpa?” and you’re guaranteed to remember it for as long as you’re using Anki. This is like a huge superpower imo. Cost: like 10 min / day, Benefit: the ability to remember <em>anything</em> you care about, forever!</p>
  </li>
  <li>
    <p><strong>F.lux</strong></p>

    <p>This one is really great! It syncs the blue light coming out of your screen with the sun cycle at your location. There are a ton of reasons why getting blasted with blue light is bad for you, but it’s also just a much more pleasant experience when your computer syncs with the sun cycles like this.</p>
  </li>
  <li>
    <p><strong>Dark Mode wallpaper</strong></p>

    <p>I like using a solid grey-white background during the day and solid black background when the sun goes down. This is a great free option for setting different wallpapers that trigger when the sun goes up / down.</p>
  </li>
  <li>
    <p><strong>Fatebook chrome extension</strong></p>

    <p>Press command+shift+T to make a prediction! Get calibrated!</p>
  </li>
  <li>
    <p><strong>Inbox when ready / Ochi</strong></p>

    <p>I noticed that a lot of the time, either when it’s time to switch my attention to whatever I’m working on next, or when I just get bored with my current task, I’d catch myself impulsively pressing the keystrokes: new tab, “gmai-“ without even thinking about it. Interestingly, if I remembered some embarrassing interaction or the midterm I just did bad on, I’d do the same. If it wasn’t gmail, it would be the built-in mail app, instagram, or twitter.
  If you’re like me and impulsively check your twitter / instagram you should get something like “SelfControl,” “Freedom,” “Ochi,” or one of the other apps that do this. I found Ochi to be the best for me because it’s cheaper than Freedom and has an option to set up times of the day where apps / websites are blocked. SelfControl seems cool too and it’s free.
  If your main issue is gmail, you should totally get “Inbox when ready.” It gives you a quick prompt that makes sure you really need to open the app and I found this to be super helpful.</p>
  </li>
  <li>
    <p><strong>HiddenBar</strong></p>

    <p>Pretty essential if you’re a mac user imo. A lot of mac apps come with menu bar items that can’t be removed. Mac doesn’t have a built-in way to hide menu bar items so I use this to fight the clutter.</p>
  </li>
  <li>
    <p><strong>Magnet</strong></p>

    <p>Also pretty essential if you’re a mac user imo. Lets you drag a window to the top of your screen to fully enlarge it and left / right for split-screen without having to “Enter Full Screen” or “Tile Window to Left/Right of Screen”.</p>
  </li>
  <li>
    <p><strong>Arc</strong></p>

    <p>Pretty cool! I dunno. I’m not gonna push it too hard but I generally like arc.</p>
  </li>
  <li>
    <p><strong>LLMs</strong></p>

    <p>If you were told 5 years ago that you can pay $20 a month and get access to a website that’ll literally answer graduate-level math / CS questions with a single prompt, I’m sure you’d pay like 4x the price. I pay for gpt, Claude, and Cursor.</p>
  </li>
</ul>

<p><em>Citing my sources: Thanks for all the recommendations, Dad :)</em></p>]]></content><author><name>George Ingebretsen</name></author><category term="Fun" /><summary type="html"><![CDATA[Some tools I’ve found to be really helpful!]]></summary></entry></feed>