Where to begin... In the past month, we’ve seen an explosion of AI announcements. AI is getting ever more powerful, and now it’s being deployed in situations where it can actually enhance our productivity (Microsoft Office, Google Workspace, etc). But with that also comes a lot of uncertainty and questions like: “Will AI replace our jobs?”. As of right now, that seems unlikely, but as models get ever more capable, who knows! It might be inevitable (and when it happens, I want to be on good terms with our AI overlords).
Anyway, get yourself comfortable, because this is a long newsletter edition. As always, I’m happy to get your feedback on this. Reply to this email and let me know how I can improve this newsletter.
Enjoy the weekend!
👨‍🏫 Simply Explained
Last week I started posting short-form content for those with the attention span of a goldfish. Aside from YouTube, you can now also follow me on:
🤓 Cool Stuff I Found on the Internet
Scientists worked 12 years to fully map the brain of a baby fruit fly. It was a slow process that required them to slice the brain into thousands of samples, image each one with an electron microscope, and then piece the 2D images together into a 3D model. This video shows the process. They chose fruit flies because they have a small brain: 3,016 neurons with over half a million connections between them. For comparison, the human brain has 86 billion neurons and vastly more connections. The scientists hope their 3D brain model could help us build more efficient artificial neural networks, which are very inefficient compared to biological brains.
Elon Musk promised it, and now the algorithm behind the Twitter timeline has been open sourced. The system gathers candidate tweets and ranks them with a machine learning algorithm, then filters out tweets from people you’ve blocked and tweets you’ve already seen before serving the rest to you. The algorithm optimizes for positive engagement and prioritizes tweets you’re likely to interact with (it also boosts tweets from Musk himself 🤨). Open sourcing the code is only the first step: Twitter now needs to show it’s willing to accept changes and proposals from the community and incorporate them into the algorithm.
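The rank-then-filter flow described above can be sketched in a few lines of Python. This is a toy illustration with made-up names and a single precomputed score; the real system uses heavy machine learning models and many more pipeline stages:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    score: float  # stand-in for the ML model's predicted engagement

def build_timeline(candidates, blocked, seen):
    """Rank candidate tweets by predicted engagement, then drop
    tweets from blocked accounts and tweets already seen."""
    ranked = sorted(candidates, key=lambda t: t.score, reverse=True)
    return [t for t in ranked if t.author not in blocked and t.text not in seen]
```

Calling `build_timeline` with a blocked author simply drops their tweets from the ranked list, which mirrors the filter stage in Twitter’s published code.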
61 companies in the UK trialed a 4-day workweek for 6 months and the results are interesting. Employees reported they were happier, less stressed, slept better, and had more time for their families. Company revenue stayed the same while resignations decreased. 91% of the companies are planning to keep the 4-day workweek. But take these results with a grain of salt: most companies in the study were small (<25 employees), and we don't know if company revenue would remain unaffected in the long run.
The James Webb Telescope is so sensitive it detected a sand storm on a planet 40 light years away. That’s over 370 trillion kilometers! Planet VHS 1256 b is very large (12 to 18 times the mass of Jupiter) and its sand clouds can reach temperatures of 815°C. The telescope also analyzed the atmosphere and found traces of water, methane, CO and CO2. Amazing to think we can “see” weather patterns on faraway planets, although I hope the alien weather reports are a bit more accurate than here on Earth 🙃.
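For the curious, the distance claim checks out. A light year is roughly 9.46 trillion kilometers, so 40 of them come to about 378 trillion km:

```python
LIGHT_YEAR_KM = 9.4607e12  # kilometers in one light year (rounded)

distance_km = 40 * LIGHT_YEAR_KM
print(f"{distance_km:,.0f} km")  # about 378 trillion km
```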
The Japanese Hayabusa2 mission took samples from an asteroid and flew them back to Earth in December 2020. Analysis is still ongoing, but scientists have now found that the samples contain organic compounds like uracil (a building block of RNA) and vitamin B3, also known as niacin (needed for metabolism). This adds more credibility to the theory that the building blocks for life originated in space and were delivered to Earth billions of years ago by meteorites. The next question is: how common are these molecules in asteroids? Well, we might find out soon enough. In September, NASA’s OSIRIS-REx spacecraft will return to Earth with samples from another asteroid, Bennu.
Drinking in space is hard. Usually, beverages are served in Capri Sun-style pouches to prevent spilling. Not anymore: NASA now has a space cup. It holds beverages in zero-g environments, even when flipped upside down! Here’s a cool video demonstration. The cup exploits surface tension and capillary forces (water wants to stick to itself and to the cup’s walls). It was co-invented by astronaut Don Pettit. You can buy an actual, flight-certified cup from Spaceware for $650, although you need to go to space to get the most out of it 🤪.
Relativity Space's Terran 1 is the world's first rocket that is almost entirely (85%) 3D printed. It uses liquid methane and oxygen as fuel, a combination well-suited for future Mars missions. The company's first mission was nearly successful: the rocket survived Max-Q, the point in the flight where structural loads on the rocket are highest, and even achieved successful stage separation, but then the upper stage failed to reach orbit. The exact cause is still being investigated, but the company was thrilled with how its vehicle performed. They already have $1.65 billion worth of contracts, and are working on a more powerful, reusable rocket.
🧠🤖 Artificial intelligence
By now, you’ve heard how amazing ChatGPT is, but it’s held back by its training set, which only contains data up to September 2021, so ChatGPT knows nothing about recent events. Until now! ChatGPT can now be extended with plugins that let it talk to other services. You can ask it to book a table at your favourite restaurant (OpenTable), order from local stores (Instacart), or even ask it questions about your own files, notes, emails, and so forth (Retrieval plugin).
OpenAI’s latest language model, GPT-4, is multimodal: it accepts both text and images as input. Now you can ask it to explain what’s funny about a cartoon, or give it a napkin drawing of your new website and ask it to write the code. OpenAI notes that GPT-4 “exhibits human-level performance” on various benchmarks. It passes the bar exam with a score around the top 10% of test takers, whereas ChatGPT scored around the bottom 10%. The model is also more “steerable”, allowing you to define how it should respond. For instance, you can ask it to act like a tutor that never gives the answer directly to a student, but instead asks the right questions to help them think for themselves. It still has limitations: it doesn’t learn from its experiences, it can still lie and make things up, and it’s trained on data up to September 2021, so it knows nothing about recent events.
In an open letter, the Future of Life Institute is asking AI companies to pause the development of AI models more powerful than GPT-4. The letter is signed by Elon Musk, Steve Wozniak, Andrew Yang, and over 1,000 other people. They're afraid of the impact AI could have on our society, raising questions like: "Should we let machines flood our information channels with propaganda and untruths?" and "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?". That's pretty dark! However, it's also not unfounded, given that nobody fully understands why neural networks work as well as they do.
Microsoft is doubling down on AI and is integrating it into nearly every Microsoft 365 app. The company introduced Copilot, which uses the Microsoft Graph to combine data from your different apps. Soon you’ll be able to draft a PowerPoint presentation based on specifications in a Word document, ask Copilot to take meeting notes inside Microsoft Teams, or even get its help creating pivot tables in Excel. Microsoft was quick to point out that Copilot can make mistakes and that human checks are required. Google announced similar features for Docs, Sheets and Slides.
Paul Graham - computer scientist and venture capitalist - makes an important point about AI: it needs to be trained on data that people create. So by that reasoning, we can't all use AI, because then there'd be no data to train future models on. In a subsequent tweet, Graham notes that people who don't use AI will gain more influence as they will be the only "organic writers" who generate all the data to train AI, which is then used by everyone else.
Research by Stanford University revealed that AI code-generation tools can produce insecure code. They ran 89 tests against GitHub’s Copilot, and in 40% of cases Copilot produced code containing exploitable vulnerabilities! The researchers’ conclusion: be careful when using these tools. Use them to generate boilerplate code, but verify the AI’s work before shipping it.
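As an illustration (my own example, not one taken from the study), here is the kind of vulnerability these tools often emit: unsanitized string interpolation in an SQL query, shown next to the parameterized version a human reviewer should insist on:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: the username is interpolated straight into the SQL string,
    # so an attacker can inject extra SQL (classic SQL injection).
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query lets the driver escape the value.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Feeding the unsafe version the payload `' OR '1'='1` returns every row in the table, while the safe version correctly returns nothing.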