Jason Michael Perry — Thoughts on Tech & Things

Latest Thoughts

  1. 🧠 Will Battery Tech Ease Range Anxiety?

    One silver lining for future EV owners is the rapid advancement in battery technology. I own a Tesla Model 3 with a lower range, and for my daily commuting and city errands, it provides more than enough charge.

    Where range anxiety kicks in is on longer trips. Sadly, as noted in my earlier post, America’s charging infrastructure is still not quite there. But imagine this: future EVs with these new batteries could boost ranges to 600-900 miles. You could drive from Charlotte, NC, to New Orleans (about 700 miles) without needing to recharge.

    Of course, these long-range capabilities will arrive first in premium models, but as the technology progresses, each new generation of EVs will benefit from these advancements. This battery trickle-down effect ensures continual improvements across the market – and over the next decade, battery tech could kill range anxiety.

  2. 🧠 Guess what? EV Charging Still Sucks

    I keep hoping for better news, but Tesla’s Supercharger network remains the gold standard, which is concerning considering their recent decision to lay off the entire Supercharger division.

    Earlier this year, I rented an EV in New Orleans and had a frustrating experience. The nearest charger was at least 20 minutes away, and none of the downtown hotels seemed to offer charging. Even in areas with great charging infrastructure, the inability to charge at your destination can be a significant inconvenience, and it’s especially challenging in rural and southern regions, where options are even scarcer.

    The move to EVs is clearly a transition, not an overnight revolution. However, the persistent narrative of unreliable charging networks stokes fear and hesitation among potential buyers. Sometimes it feels like sabotage that no other company can provide a reliable, large-scale charging network on par with Tesla’s.

  3. 🧠 Is CrowdStrike’s Mishap A Blow for Test Driven Development Enthusiasts?

    TDD, or Test Driven Development, is a coding practice where tests are written before the actual code. It’s designed to validate functionality and improve code quality. However, a deep dive into the recent CrowdStrike incident reveals potential weaknesses in this approach.

    A critical error was outlined in their analysis:

    “The new IPC Template Type defined 21 input parameter fields, but the integration code that invoked the Content Interpreter with Channel File 291’s Template Instances supplied only 20 input values to match against.”

    Also, on July 19, 2024:

    “Two additional IPC Template Instances were deployed, introducing a non-wildcard matching criterion for the 21st input parameter. These new Template Instances resulted in a version of Channel File 291 that required the sensor to inspect the 21st input parameter—a condition not met by previous versions.”

    Essentially, the test expected 21 parameters and always supplied a value for the 21st, even when one was not provided. That allowed the test to pass, but in production, when only 20 values were delivered, the app failed, leading to a null reference for the missing 21st parameter. This highlights an oversight in the testing process: the test did not accommodate real-world application changes.
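
    To make that failure mode concrete, here is a minimal sketch of my own devising in Python (not CrowdStrike’s code, which is C++) showing how a test that pads its own input can pass while hiding the bug:

      # Hypothetical illustration of the failure mode, not CrowdStrike's code.
      EXPECTED_FIELDS = 21

      def match_template(values: list[str]) -> str:
          # Production code assumes all 21 fields are always present. With
          # only 20 values this raises an IndexError, the Python analog of
          # the out-of-bounds read in the sensor.
          return values[EXPECTED_FIELDS - 1]

      def test_match_template():
          values = ["v"] * 20
          # The test "helpfully" pads the missing 21st value, so it always
          # passes and never exercises the real-world 20-field input.
          while len(values) < EXPECTED_FIELDS:
              values.append("wildcard")
          assert match_template(values) == "wildcard"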

    While TDD and unit testing are valuable, they are not foolproof. I find development teams often write poorly designed tests, sometimes with the goal of meeting a quota on code coverage, or with the idea that they can update the test later. This incident serves as a reminder that quality in testing should never be sacrificed for coverage metrics.

    Do you employ TDD in your projects? What has been your experience with balancing test quality and coverage?

    https://www.crowdstrike.com/wp-content/uploads/2024/08/Channel-File-291-Incident-Root-Cause-Analysis-08.06.2024.pdf

  4. 🧠 Is Meta Building A Celebrity Voice Assistant?

    Meta is reportedly in discussions with several celebrities about licensing their voices for upcoming AI projects. The buzz is that they’re aiming to launch a voice assistant like Siri or Google Assistant, but one that allows users to choose from a list of celebrity voices.

    This concept isn’t entirely new. Remember Waze and its popular feature allowing users to select celebrity voices for navigation? However, it raises questions about the potential restrictions such licensing deals might impose on the assistant’s dialogue capabilities. Could celebrity-endorsed voices limit what your AI can say?

  5. 🧠 Microsoft lists OpenAI as a competitor

    In a shift, Microsoft has officially listed OpenAI as a competitor in its latest SEC filing. As the two giants roll out increasingly overlapping products—from ChatGPT versus Copilot to Bing versus SearchGPT—the dynamics of their relationship seem to be evolving. The addition of a new CEO for Microsoft’s AI division only adds to the tension.

    This might also be seen as a strategic move to placate the FTC, which has expressed concerns about the closeness of their partnership. It raises the question: are Microsoft and OpenAI partners, competitors, or perhaps something in between—frenemies?

  6. 🧠 OpenAI Unveils Structured Outputs

    OpenAI has introduced a groundbreaking feature that lets developers define a schema and receive responses from ChatGPT models that conform to it. This enhancement moves beyond traditional free-text responses, opening up dynamic ways to use the information across various systems.

    Previously, prompt responses were limited to text answers, which restricted their usefulness for feeding other systems. Now, you can define a schema—for instance, a data structure for a recipe—requiring the model to break each step into an array or list and organize ingredients similarly.

    This structured approach simplifies integrating the data directly into databases or crafting interfaces and formats for the response data, making developers’ lives easier and unlocking exciting new possibilities.
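
    As a rough sketch of what this looks like with the OpenAI Python SDK (the Recipe fields here are my own example, not a schema from OpenAI’s docs):

      from openai import OpenAI
      from pydantic import BaseModel

      class Recipe(BaseModel):
          name: str
          ingredients: list[str]
          steps: list[str]

      client = OpenAI()

      # Ask for a recipe and require the response to conform to the schema.
      completion = client.beta.chat.completions.parse(
          model="gpt-4o-2024-08-06",
          messages=[{"role": "user", "content": "Give me a recipe for gumbo."}],
          response_format=Recipe,
      )

      recipe = completion.choices[0].message.parsed  # a Recipe instance
      print(recipe.ingredients)

    No more regex-parsing a blob of prose just to get at the ingredients list.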

    I’m looking forward to testing out these new structured outputs!

  7. 🧠 Is Suno a Modern-Day Napster Saga?

    It feels like the Napster days are back as the RIAA gears up to sue AI music companies Udio and Suno. Interestingly, these companies have countered the allegations by claiming fair use of copyrighted data.

    “We train our models on medium- and high-quality music found on the open internet… Much of this indeed contains copyrighted materials, some of which are owned by major record labels,” says Suno CEO Mikey Shulman. He firmly believes that “Learning is not infringing. It never has been, and it is not now.”

    The RIAA has substantial reasons to pursue this case, especially as signs emerge of a slowdown in music streaming (https://www.bloomberg.com/news/newsletters/2024-08-04/is-the-music-industry-slowdown-a-crisis-or-a-blip), with some analysts suggesting the industry may have peaked. This could prompt the organization to aggressively seek new revenue streams through litigation.

  8. 🧠 AI Companies are Forcing Websites to Play Whack-a-Bot

    404 Media has an insightful piece on the complexities of correctly blocking bots—a topic perfectly aligned with my recent newsletter on AI and the robots.txt file.

    Anthropic, the creators of Claude, are actively indexing content on the public web. However, the names of the bots and crawlers they employ seem to be in flux, changing frequently. This makes it difficult, if not impossible, to tell AI tools not to consume your content.

    I believe this isn’t necessarily a nefarious action, particularly from a company that emphasizes making AI safe. However, this constant name-changing makes it challenging to ensure you’re blocking the intended bots.

    In an ideal scenario, robots.txt would allow for a whitelist approach, enabling us to specify who can access our content and compelling companies to maintain consistent bot names. Alternatively, it might be time to adopt Reddit’s approach and block everything, sending both search engines and AI bots a 404 error page.
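
    For reference, blocking a crawler in robots.txt looks something like this; ClaudeBot and anthropic-ai are both names that have reportedly been associated with Anthropic at various times, which is exactly the problem the article describes:

      # Tokens reportedly used by Anthropic; verify the current names
      # before relying on this.
      User-agent: ClaudeBot
      Disallow: /

      User-agent: anthropic-ai
      Disallow: /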

  9. 🧠 The First Taste of Apple Intelligence is Here

    Apple has forked its betas, with iOS and iPadOS 18 releases continuing, likely targeting a general release in September with brand new iPhones.

    Developers on Macs, iPhones, and iPads that meet the Apple Intelligence requirements can now download iOS and iPadOS 18.1, as well as macOS 15.1, giving a short peek at some of the early features.

    The current suggestion is that Apple Intelligence features will ship in October. I still expect we will see some new features not announced at WWDC specific to the new round of devices.

    I have the features enabled on my laptop, but sadly I’m one generation behind on my phone, so I’ll have to wait and see. For now, 9to5Mac has a rundown of the features available in the beta.

  10. đŸ§‘đŸŸâ€đŸ’» Having Problems with Drupal, Lando, and Pantheon?

    I just spent forever trying to pull a new Drupal 10 website repo from Pantheon and get it running locally with Lando. Brutal. I’m documenting my path in hopes it helps someone someday; it started with this Reddit post: Problem getting Lando set up with Pantheon.

    Steps to Get Your Drupal Site Running Locally with Lando and Pantheon:

    1. Clone the Site from Pantheon:
      git clone <pantheon-repo-url>
      This command clones the repository from Pantheon to your local machine.
    2. Initialize Lando:
      lando init
      • When asked where to get your app’s code, do not select Pantheon; choose the current working directory where you cloned the site.
      • When asked which recipe to use, select Pantheon (see the .lando.yml sketch after these steps).
    3. Start Lando:
      lando start
      This command starts the Lando environment for your site.
    4. Pull the Database and Files:
      lando pull
      This command pulls the database and files from Pantheon. (This step WILL fail due to database access issues.)
    5. Destroy the Current Environment:
      lando destroy
      This command destroys the current Lando environment, which can help resolve issues with database access.
    6. Rebuild the Environment:
      lando start
      This command starts the Lando environment again, which should rebuild the database correctly.
    7. Pull the Database and Files Again:
      lando pull
      This command pulls the database and files from Pantheon again, which should now be successful.
    8. Clear All Caches:
      lando drush cr
      This command clears all caches, which can help resolve errors and stability issues. If you haven’t already, you may need to install Drush first with this command:
      lando composer require drush/drush
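
    For reference, the .lando.yml this produces should look roughly like the sketch below; the framework value, site name, and id are placeholders that depend on your Lando version and your Pantheon dashboard:

      name: mysite
      recipe: pantheon
      config:
        framework: drupal10  # or drupal9, depending on your Lando version
        site: mysite
        id: <your-pantheon-site-uuid>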

    Now, if you’re lucky, the gods have smiled upon you and you have a running site. Hoping to see the Pantheon docs or the Lando recipe updated sooner rather than later.

  11. 🧠 Zuckerberg’s Thoughts on Open Source AI

    Meta’s release of the largest open-source AI model, Llama 3.1, with 405 billion parameters, continues to position the company as a leader in open-source AI. This move is helping to build a growing ecosystem around these models that’s simply not possible with commercial models from OpenAI, Anthropic, and others.

    I have found the freedom of running the smaller Llama 2 model on my local environment to be liberating, providing a low-stakes way to experiment with new data approaches. The substantial investment in building these models is sure to pay off.
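
    If you want to try this yourself, one low-friction path is Ollama and its Python client; this is just how I’d sketch it, not the only way to run Llama locally:

      import ollama  # pip install ollama; assumes the Ollama app is running

      # Ask a locally running Llama model a question. No API key, and no
      # prompt data leaves your machine.
      response = ollama.chat(
          model="llama2",
          messages=[{"role": "user", "content": "Summarize RAG in two sentences."}],
      )
      print(response["message"]["content"])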

    Of course, Llama is not the only open-source AI model available. Hugging Face is filled with AI models of all sorts, and the French AI developer Mistral continues to release more powerful models. Notably, Mistral has released some of its newer models under non-production or research licenses that forbid commercial use and deployment (Mistral License).

    Meta also requires a license, which is thankfully simple and easy to read. As you can see (Meta Llama 3 License), it offers flexibility but requires applications using it to state “Built with Meta Llama 3” and to prefix model names with “llama” before sharing them. Open-source purists are sure to disagree with these stipulations, seeing them as a departure from the true spirit of open-source freedom.

    This may be a small price to pay for access to frontier-level models, but I wish more companies would embrace the idea of exploration and open source, providing last-generation releases to the developer community.

    Zuckerberg’s blog post on Meta’s place in AI is a worthwhile read. I plan to dive into it in a future newsletter, so subscribe and stay tuned.

  12. 🧠 OpenAI Just Released Search!

    I’m surprised it took so long. After all, OpenAI’s ChatGPT powers Microsoft’s Bing search, so in some ways, the company has been in the search game from nearly the start.

    What’s interesting is that OpenAI’s approach is less like Bing and Google’s AI Overviews and more like Perplexity AI—my favorite new search tool in years. This is a good thing, changing our relationship with search from a list of results that may hold the answer, to actual responses you can drill into with follow-up questions.

    For access you need to join a waitlist, and I’m on it, so I can’t kick the tires just yet. OpenAI expects to integrate search into ChatGPT in the long term rather than maintaining them as separate products.

    This means the competition in search is heating up for Google—and so far, their attempts to add AI to search have been lacking.

  13. 🧠 Why Google is no longer limiting third-party cookies in Chrome

    The beauty of digital marketing lies in its ability to be highly targeted. If you run ads on social media or through Google’s vast ad platform, you know how amazing it is to target based on criteria like gender, education level, income, and location.

    These targeted ads perform better than untargeted ones, allowing you to reach customers at lower acquisition costs. However, this requires platforms to gather as much information about you as possible and track your activity across the internet and apps to send you these highly targeted ads.
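
    For context, a third-party cookie is just a cookie set by a domain other than the site you’re visiting. An ad network’s response might include a header like this (the domain and id are made-up examples):

      Set-Cookie: uid=abc123; Domain=.adnetwork.example; Path=/; Secure; SameSite=None

    Because SameSite=None lets that cookie ride along on requests from any page embedding the network’s scripts or pixels, the same uid follows you from site to site.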

    Several years ago, Apple introduced features to its platforms that prevented this type of tracking through its apps and browser as part of its privacy stance. This move was quickly followed by other browsers like Firefox. Google also promised to do the same but has repeatedly delayed implementing this change (Google halts its 4-plus-year plan to turn off tracking cookies by default in Chrome).

    So why did Google get cold feet? In 2022, Meta (then Facebook) saw a $250 billion drop in valuation due to Apple’s new rules (How Apple helped create Facebook’s $250 billion ad crisis). Despite Google’s size, a significant portion of its revenue comes from ads. Blocking third-party cookies in the world’s most-used web browser would have a seismic impact on its business. Google’s hope was to block third-party cookies in Chrome while replacing them with a worse tracking method called FLoC (Google’s FLoC Is a Terrible Idea), but as that path became unattainable, it decided to backtrack on its previous plan.

    This leaves us with a new ad campaign launched by Apple, reminding folks not to use Chrome unless they want their browser to track, watch, and report everything they do. 

    So, what browser are you using?

  14. 🧠 How Will Global Copyright Rules Shape AI Development?

    I’ve often wondered how differing copyright rules and protections across the globe might impact AI. The Financial Times has an interesting write-up on how Japan is skirting international copyright rules to become a leading destination for AI startups.

    This is an interesting counter to the EU, which has been a regulation hawk on AI, to the extent that both Apple and Meta have decided to delay the release of key AI features in their member countries.

    In a world where, as Nvidia’s CEO Jensen Huang aptly put it: “While some worry that AI may take their jobs, someone who’s expert with AI will,” this sentiment applies equally to businesses. Those businesses that learn to harness AI will create a competitive advantage that could make the difference between thriving and merely surviving in the coming years.

  15. 🧠 CrowdStrike Reminds Us That Dependency Management is a Major Attack Vector

    I’m sure you’re aware of one of the biggest computer shutdowns ever, which has grounded over 2,000 flights worldwide, shuttered hospitals, retail stores, and who knows what else. All because of a faulty automatic update from CrowdStrike—software ironically meant to protect companies—that’s used by a majority of Fortune 500 companies.

    My take is that this issue has been brewing for some time: automatic updates and increasing failures in dependency management.

    For the non-developers in the room, most modern software is built on tons of dependencies from a combination of open-source and closed-source repositories. When you set up a new project or do an update, a dependency management tool downloads the code from an external source so your application can use it.

    Our software has tons of dependencies, much of this to allow us, as developers, to avoid rebuilding logic that’s already been built. Think database connections, complex math operations, image editing tools, charts and more.

    In 2016, the developer of a tiny package called left-pad, used by tons of software, rage-quit over a naming dispute and unpublished his code, breaking the Internet and bringing down tons of applications around the world.

    This has become an increasingly common attack vector where squatters buy up abandoned packages only to add code that can be used as a back door, or worse, hackers become contributors to popular packages with the goal of injecting malicious code into applications.

    While all eyes point to a mistake by CrowdStrike, it serves as a reminder that our software has become so complex that auto-updating dependencies, security software, or operating systems on production platforms without testing remains a huge open door for vulnerabilities.

    IT managers should not let software publish updates to mission-critical systems, even from their most trusted vendors, without testing and a ready rollback procedure to get systems back up quickly in the case of a failure.
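
    A cheap first line of defense is pinning exact versions so nothing updates silently; a minimal Python-flavored sketch (the packages and versions are just illustrations):

      # requirements.txt: pin exact versions so an update is a deliberate,
      # testable event instead of a silent one.
      requests==2.32.3
      pillow==10.4.0

      # With --hash entries added per package, `pip install --require-hashes`
      # also rejects any artifact whose contents changed since you vetted it.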

  16. 🧠 OpenAI Introduces GPT-4o Mini

    Not a ton of detail yet, but interesting to see OpenAI push into the realm of Small Language Models—especially with a multimodal model that can easily understand audio, video, images, and text.

    The round-trip latency cost of calling services on the Internet can make or break AI-powered hardware—just look at the Humane pin.

    Powerful small models that run locally can open the door to super-fast response times and fewer privacy concerns.

    Of course, details are super slim, but if OpenAI plans to license GPT-4o mini to hardware makers, it could open the door to tons of exciting new products that actually work.

  17. 🧠 Is Meta’s Multi-Token Prediction Model A Game-Changer?

    Meta just released a multi-token prediction model that could speed up inference, the process where a model responds to a prompt, by 3x.

    Existing LLMs work like autocomplete, predicting the next token or word in a sequence. This novel approach looks to predict 2-4 words in a sequence all at once, allowing for faster response times.
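
    A minimal sketch of the idea in Python, my own illustration rather than Meta’s actual architecture: a shared trunk feeds several output heads, each predicting a different future position.

      import torch
      import torch.nn as nn

      class MultiTokenHeads(nn.Module):
          # One linear head per future offset (t+1 ... t+4), so a single
          # forward pass proposes several tokens instead of one.
          def __init__(self, d_model: int, vocab_size: int, n_future: int = 4):
              super().__init__()
              self.heads = nn.ModuleList(
                  nn.Linear(d_model, vocab_size) for _ in range(n_future)
              )

          def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
              # hidden: (batch, seq, d_model) from the shared transformer trunk.
              return [head(hidden) for head in self.heads]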

    Meta released the model under a research license on Hugging Face, continuing to solidify its place as the open-source AI leader. I keep saying it, but who would have thought Meta would be blazing new paths?