Jason Michael Perry, Author at Jason Michael Perry
  1. Secure Software

    Reading about Anker’s recent security issues has been interesting. Along the way, I came across this great comment on The Verge’s article:

    “why did this happen at all when Anker said these cameras were exclusively local and end-to-end encrypted?” and “why did it first lie and then delete those promises when we asked those questions?”
    Occam’s razor.

    As a software developer, I can tell you with about 95% certainty what happened. The Anker software team screwed up and didn’t know about this security hole. They didn’t test this scenario. They just didn’t know. They probably don’t have enough security engineers and checks. It’s probably not a huge company.

    As for the lies, the Anker PR/marketing people you talked to have no clue. They are probably just fed information from whoever in the company. They probably didn’t “lie”. Maybe the engineers were still investigating and weren’t sure so they told them that “no chance there’s a security hole”. Maybe a dev manager wanted to cover his/her ass and said, “there’s no way we didn’t test this”. Whatever the case, there’s a gap between reality (i.e. source code) and how the product that the marketing team is responsible for selling (welcome to software development!).

    So yes… it’s fun to think of conspiracy theories like the Chinese government ordering Anker to leave a backdoor so that it could keep an eye on the front porch of Americans… but Occam’s razor chalks this up as careless software development and unresponsive marketing/PR(likely both a result of being a small’ish company).

    This. Yes, this right here is, in my own personal belief, the true reality of the situation.

    Mindgrub is not a huge company, but we spend a lot of time focused on the processes we need to create secure and scalable applications. We manage to do this because we are an engineering team at scale, and that requires us to set rules, from branching strategy to mature continuous integration policies, that our engineers can embrace as they move from project to project.

    These processes are pretty good but nowhere near perfect, and I can tell you that the way we build applications is light years beyond many organizations I have worked with in the past.

    Why? Because many best practices collapse when not run at scale. A single developer cannot peer review his or her own code. Take almost any 2-4 person internal development shop, and 95% of the time you will find cowboy coding happening on a regular basis. Like all humans, we make mistakes regardless of how amazing we may be as individual developers. I can’t begin to tell you how often a basic audit of code, infrastructure, or process makes it immediately obvious that this approach has created technical debt of huge magnitude.

    What is even more common in almost every one of these situations is a rift between the appointed chief engineer(s) and other teams like marketing and sales. Phrases you may hear are “this is what the customer really wants,” “we had to build it this way,” or “if you only understood how it worked.”

    Keep an eye out, friends.

  2. What Security?

    Every year all Mindgrub employees are required to complete our annual security training. This year we switched it up and moved to the well-received KnowBe4 training curriculum.

    Watching and completing the ~45 min eLearning session seemed a bit surreal this holiday season. After all, LastPass completely failed, a House representative-elect lied about everything, and Anker was caught lying about its local-only cameras actually connecting to the cloud. All this without mentioning the many issues still circulating around FTX being hacked and its founder running a billion-dollar company with little to no processes in place.

    It really makes you stop and realize how hard it increasingly is to keep yourself safe. It’s one thing when we know we need to protect ourselves from those we might label as unsavory, but it becomes much more difficult to protect ourselves from the entities that we expect to protect us.

    When I arrived at Mindgrub we made heavy use of LastPass. While we liked the tool, we found it lacked certain enterprise features we wanted and migrated to a different enterprise password manager. That tool is the password manager that, combined with our security processes, helps us limit access to only those who need it while also preventing team members from sharing passwords as text in tools like Slack or email.

    Having a tool like LastPass hacked to the point that so many people are left at the mercy of a master password, now a lone gatekeeper that hopefully can survive brute-force attacks, is a difficult pill to swallow. LastPass’s customers did everything right and trusted a company whose entire charter is securing your data better than you can yourself.

    The thing is, LastPass is just the most recent of these types of companies to let us down. Y’all remember Equifax, YouTube, Facebook, Marriott, Verizon, …? What is crazy is that this is just the list we know about, and having spent decades working with security specialists, I can absolutely promise you that only a very small percentage of security incidents are ever publicly reported.

    What we are facing is the reality that security is a team sport, and heck, maybe a village or country-wide sport. You or I can do everything correctly, however, as has been the case our entire lives, we all have dependencies on people, products, businesses, or governments, and we are all susceptible to the weakest link in this list. Just one chink in our combined armor, and the impacts are tremendous.

    So consider this a reminder for all of us to keep being serious about the importance of security in our lives. Be diligent and make sure that we hold our IT and development teams to the security standards we expect of ourselves. Are you a developer? Find a security framework and make sure you and your team follow it.

  3. Migrating

    My friend recently made the big shift from Android to the iPhone. As I celebrated her finally joining the blue text messages club and her ability to join family albums, use SharePlay, and so many other iOS features, we had to start with the very first step: migration.

    I’m not the average Apple customer. As a developer and tech enthusiast, I read about features before they get released, tune in to every announcement, and regularly run beta versions of iOS. I typically keep a Google Pixel phone around to keep up on Android trends and test applications we build for our customers, but I have never used Apple’s migration application, so I must admit that I was giddy with excitement to see just how well it truly works.

    The process could not have been simpler. The new iPhone presented us with a QR code we scanned with the Android phone to download the Move to iOS app from the Google Play store. Once downloaded, it asked us for a one-time code that appeared on the iPhone. After that, we received a few warnings about the access the Move to iOS app requested, but other than that, it was click and wait. The download estimate began at nearly 3 hours but, in reality, took closer to 30 minutes. Once migrated, we chucked the Android phone in a corner (joking!!) and completed the iOS setup process. If you have upgraded iOS on an iPhone, you know the standard options like setting up Siri and Face ID.

    The migration was pretty remarkable in how deeply it copied everything over. It moved over messages, loaded contacts, preconfigured email, calendar, photos, pre-downloaded all of her apps, and even managed to maintain the ring tones and text sounds from her Android phone. She was literally able to pick up the phone and begin using it without skipping a beat.

    Once the phone was up and running, I sat back, hoping to see how she experienced the dynamics of a much cleaner and easier-to-use phone. I tried my best to avoid pressing buttons and jumping in – but I did way too much of that. What I witnessed surprised me – decades of software updates had made many features that felt normal to me unwelcoming or complex. So much of what I assumed to be intuitive really came from my years and years of using iOS devices.

    Features like swiping to search or Control Center become things that, I imagine, many users may never really discover. I notice this same dynamic often at Mindgrub when new employees move from Windows to macOS for the first time. I often take screenshots or use Quick Look to view previews while searching for a particular file, habits that are invisible to someone new to the platform.

    Many of these things are both so easy and so hard. As iOS and macOS have grown, layers and layers have caked on top of each other, and those layers have made us forget that things are no longer as easy or intuitive as they once were.

    Migrating was a fascinating reminder to stop and think about the long journey all software takes, and about the difference between common knowledge, or common intuitiveness, and learned intuitiveness.

  4. It Wasn’t Me

    Ars Technica has an amazing piece on the dystopian possibilities of AI images and deep fakes. As the article notes – deepfakes have been a reality for years, but AI takes what was a skill and makes the process so simple that anyone can channel its uses for good (or primarily bad). It opens up a world where the art of disruption is limited only by our ability to capture a picture in our imagination and transcribe it as words.

    I can remember joking about how insane a song like Shaggy’s “It Wasn’t Me” was, when in reality, maybe Shaggy was a sage describing our pending deepfake future. For those thinking “the computers won’t fool me,” try out this quiz. It’s still early days, and fake images are already becoming harder and harder to pick out.

  5. Google Analytics 4

    Google Analytics 4, or GA4, is a reimagining of the analytics platform for a world that is post-page-views. Many of us still think of web traffic in older terms like hits, pages, and sessions – however, the mobile space and rich internet applications have changed how the web works.

    Imagine Instagram and think of a typical interaction. The infinite scroll removes pagination and page views, engagement can be tracked by how long you pause over an image or clip, and taps – while they happen – are not the core way most users interact.

    Even the concept of user sessions can seem weird when we track a popular site like Reddit, where users visit multiple times a day. All of these changes have forced us to step back and rethink how we should analyze our traffic data – and, better yet, overlap streams of data to compare one experience against another, such as web traffic against mobile traffic. Some Reddit users may move from the web on a desktop to the mobile application while on the run.

    GA4, for the first time, allows Google Analytics to pull in data streams from multiple sources and generate reports across many distinct platforms. Do you offer an e-commerce experience over the web, mobile, and tablet? We can create data streams to pull data from each source, offering a view of engagement that is more holistic than before. Goal funnels that once let you see abandoned-cart rates can now be compared across all platforms to spot trends.

    So how does this all work?

    At the core of GA4 is a new focus on data streams, events, insights, and reports, all combined with a querying system that embraces the ideas of reporting platforms like PowerBI or Tableau.

    Data Streams

    In the olden days, each Google Analytics account represented a website. GA4 has chosen to embrace multiple data sources and allow those data sources to live across platforms. This is excellent for a company with various brands, like Gap. In this new world, Gap, Old Navy, and Banana Republic could overlay data streams to better understand KPIs and measure trends across sibling sites.

    As I described above, mixing mediums provides even more power. For folks like many of our clients with mobile and web experiences, analytics have lived in two different and distinct platforms: Firebase and Google Analytics. Firebase provides analytics targeted at the mobile experience. As such, it tends to focus on concepts around events: think taps or swipes. It also focuses on time and intent, as many screens on a mobile application are measured by how much time a user spends focused on them.

    GA4 allows us to pull all these platforms together using data streams, providing a single source to view traffic and analytics across your entire portfolio of experiences.
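
    To make this concrete, here is a minimal sketch of what the web side of such a setup might look like with gtag.js. The measurement ID and event parameters below are placeholders; a companion mobile app would log the same event name through the Firebase Analytics SDK, and both streams roll up into the same GA4 property.

    `declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

    // Placeholder measurement ID for the web data stream.
    gtag('config', 'G-XXXXXXXXXX');

    // Report a purchase from the website; a mobile app would log the same
    // "purchase" event via the Firebase Analytics SDK into its own data stream.
    gtag('event', 'purchase', {
      currency: 'USD',
      value: 42.5,
      transaction_id: 'T-1001', // hypothetical order id
    });`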

    Events

    Once upon a time, we considered interactions at the page level to be enough of a view of a user interacting with our content, but that is no longer granular enough. In GA4, events, much like jQuery or CSS selectors, let you drill into the components of a page and listen for particular types of interactions with content. GA4 also allows developers to use custom code to create unique events they may want to track.

    Out of the box, Google Analytics helps us do this by automatically collecting tons of events. It also provides an easy interface to track enhanced events such as scrolling. Unlike automatic events, enhanced events are specific to how your website functions. For example, a blog that offers an infinite scroll to see new articles vs. pagination could use a scroll event to capture the equivalent of a new page view. Some websites may also find value in a hover action, something that may autoplay a video or animation.
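
    As a rough sketch of that infinite-scroll example (the selector and data attribute are hypothetical), a site could fire a manual page_view event each time a new article scrolls into view:

    `declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

    // Treat each article loaded by infinite scroll as the equivalent of a page view.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          gtag('event', 'page_view', {
            page_title: entry.target.getAttribute('data-title') ?? document.title,
            page_location: window.location.href,
          });
          observer.unobserve(entry.target); // count each article only once
        }
      }
    });

    // Observe each article as it is appended to the feed.
    document.querySelectorAll('article.post').forEach((el) => observer.observe(el));`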

    You can also create custom events that tie to unique interactions and track those interactions across a multitude of devices. If you look at the recommended events, one can easily imagine monitoring the purchase of a product or a login interaction across mobile and the web. These events also take parameters that can capture additional insights, such as how a user logged in – did they use Google, Facebook, or Apple vs. a traditional username and password?
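
    For example, here is a minimal sketch of the recommended login event with the sign-in provider passed as a parameter (the method values are simply whatever your app supports):

    `declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

    // GA4's recommended "login" event, with the sign-in method as a parameter.
    function reportLogin(method: 'Google' | 'Facebook' | 'Apple' | 'Password'): void {
      gtag('event', 'login', { method });
    }

    // e.g. after a successful OAuth callback
    reportLogin('Google');`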

    A developer can also record custom events if the combination of automatic, enhanced, and recommended events does not cover your specific needs.

    Insights

    Google uses an immense amount of data to provide insights using artificial intelligence and machine learning. While how it works is a large black box, we do know that most websites use Google Analytics to track traffic and user interactions. This treasure trove of data allows Google to learn more about how customers interact with the web, and with your competitors’ websites, than any other company. It can also use this data to surface new trends or correlations that we might otherwise miss.

    These correlations are interesting, but it can be hard to make them actionable. For example, analytics might notice that iPad users have a higher likelihood of completing a purchase.

    I avoid using most company websites to complete commerce transactions and prefer a mobile device and an app. The app experience of using Touch ID or Face ID to unlock and easily stored card options makes it much quicker. It’s not abnormal for me to shop on one device and complete a transaction on another. Analytics insights can help you find these trends.

    You can also create custom insights to see correlations quickly: for example, whether revenue is up in the last hour or whether conversions are strong in the state of New York.

    Reports

    To complete this move, Google Analytics hopes that we reconsider how we view a website or mobile app to determine success. To do this, it is eliminating many of the default reports and dashboards we have come to expect. Gone are views that reference antiquated terms like page views; in their place is a powerful custom reporting tool that lets you build reports that better reflect the KPIs you use to measure success.

    Migrating

    If you, like many, already have the older version of Google Analytics, migrating can be pretty easy and painless. First off, the existing Google tag you installed will work for both Universal Analytics and the new GA4. Once connected and associated as a data stream, GA4 will also attempt to analyze site traffic to determine events and provide information it thinks you will find helpful. Of course, you can also customize events, as discussed earlier, to better track specific interactions.
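
    As a rough sketch of what that dual setup looks like in gtag.js (both IDs below are placeholders for your own property and measurement IDs), the same tag can feed the old and new properties side by side during the transition:

    `declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

    gtag('config', 'UA-12345678-1'); // existing Universal Analytics property
    gtag('config', 'G-XXXXXXXXXX');  // new GA4 data stream added alongside it`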

    The biggest concern is that the data structures underlying GA4 are very, very different from those of the old GA. This means that you cannot move previous data over. In this sense, it is less a migration than a brand-new GA4 setup.

    If this historical data is valuable or something you hope to refer to in the future, I recommend exporting GA data before the planned July cut-off.

  6. The Paywall

    I share article excerpts with friends and family frequently. Sometimes it’s something fun like a Buzzfeed listicle, but most times, it’s an article relevant to shared interests or recent conversations. Paywalls almost always put a huge wrench in this, to the point that a good friend asked if I could share a synopsis or quote since they cannot access the full article.
    I understand why paywalls exist and the importance of paying journalists what they are worth, but paywalls continue to feel like the wrong solution. I read various publications, many filled with ads, and subscribing to them is not in my budget. Some of my favorite sources of content include places that, regardless of how great an article may be, I avoid sharing or referencing because I know the burden I’m passing on to others.

    I’m reminded of a great piece by John Gruber: what I continue to see is popular media having a smaller and smaller presence and relevance as it locks more content behind paywalls and inadvertently increases the number of independent media sources. This is not on its own wrong, but as popular voices become inaccessible, they can’t speak truth to lies spread by more viral and independent sources.

    I would easily argue that paywalls and preventing access to content are behind the rise of false viral content being spread on social media. When the source is not available, you are limited to finding a new voice and message to trust.

    In addition to trust, paywalls must also fight subscription burnout. I’m from New Orleans and have lived in many cities beyond my hometown. As such, I struggle with choosing which local news outlet to subscribe to and commonly find myself disregarding articles shared by my parents or friends because I already used my 3 free articles that month on a daily that is no longer a daily for me. Like all subscriptions these days, I need to choose the options that bring me the most value, and sadly that limits me and prevents me from reading all the news sources I so wish I had regular access to.

    This situation baffles me because many governments hope to save our traditional media with fines targeted at aggregators and social networks like Facebook. Facebook is right: why should it pay for this content? Others will duplicate, replicate, or write their own content and continue to diminish the value of the original source, or quickly supplant it with a more modern Instagram or TikTok view of the media.

    In this post alone, I passed up linking to several publications because those links would require you to subscribe to view them. Something has to change, but time and time again, we have found that locking up content and preventing access is a recipe for decline. After all, when was Howard Stern last relevant?

  7. DALL-E

    DALL-E and the world of generated images have captured the attention and imaginations of many. When I first watched the video “Humans Need Not Apply” close to 8 years ago, the technologies felt possible but still distant. DALL-E is a flashing red light showing how far we’ve come over a relatively short period.

    DALL-E is an AI-based system that generates realistic images from a string of text. What makes it uncanny is its understanding of the text, its ability to apply context, and its ability to do all of this across different artistic stylings. Using the tool is addictive; if you have not tried it, I suggest you create a free account and give it a whirl.

    Our CEO Todd has also turned me on to many other AI tools, like jasper.ai, that allow you to generate blog post articles from a simple topic or description. While they may miss the depth and meat many expect in a well-crafted post, the output is a shockingly good starting point (and better than some content on the Internet).

    What I find fascinating in the new AI space are the same copyright issues we struggle to answer around ownership, especially when referencing prior art. For example, one can sample music and use it to make a new song, but we have defined a line that determines when a new song is unique and, in other cases, when the sampling requires the artist to pay royalties to the previous artist.

    In the case of tools like DALL-E, prior art is exactly how you train a machine to create something new or unique. You give it as many samples of images and artwork as possible and provide it with metadata to describe each piece of work. That training is what allows you to ask it to generate an image of a dog in the style of Van Gogh.

    Is this a case of a new unique piece of art? To what extent is it based on the prior works that AI used to create this new piece of work? Are the uses of training sources any different than me asking a human to do the same thing? If one profits from the work, who should receive the royalty? The engineer who developed the AI? The company who created the AI? The license holder who typed a string of text to generate this new work of art? Or maybe the AI itself?

  8. Alexa

    The recent layoffs by Amazon targeting its devices unit have spurred many articles on Amazon’s inability to monetize Alexa – especially knowing that Amazon’s strategy has long focused on selling devices at cost and making revenue from its greater ecosystem. While sold at cost, the Kindle is a gateway product to Amazon’s extensive library of ebooks.

    Alexa as a product felt like the future released into everyday consumers’ hands. That is not something that happens often, but the Echo was a genuinely awe-inspiring product.

    But looking at Alexa after ten years of constant development and iteration, it’s hard not to think of it less as an awakening to a new way of interacting with computers and more as a few-trick pony. I long ago gravitated to Apple’s Siri for my household voice AI needs, not because I disliked Alexa, but because Apple has me very tightly in the grip of its entire ecosystem. Even with that, I rarely use a voice AI to do more than a few mundane tasks: check the weather, play music, and control other smart home devices. I wish I could do more, but the promise of devices like Star Trek’s computer still feels very distant. Heck, using the word “and” is still impossible for the lion’s share of smart speakers or home AIs.

    Regarding monetization, many imagined that voice would become a new interaction stream. Instead, we have learned more about the value of different devices and the pros and cons of how we interact with them. Speaking has a ton of limitations. It requires a generally quiet location, privacy is limited to whoever is within earshot, and listening takes more concentration than we realize. How often have you asked about the day’s weather only to forget and ask again as soon as your smart speaker is done?

    I love my smart speaker, and I still find Alexa fantastic, but I wonder what we all collectively want in Alexa 2.0.

  9. Welcome to the metaverse

    When Mindgrub announced our move to the metaverse, we wanted to explore the many sprouting virtual worlds and determine where to plant our virtual roots.

    The way people talk about it, the idea of a metaverse sounds like one world or one place you can enter to access a broad land of virtual content. The truth is: there is no singular virtual world (yet). Futurists imagine that instead of one world, the metaverse may parallel the internet. It might reflect a network of worlds that allows all of us to enter and leave at a whim. Facebook, now Meta, believes it has what it takes to help create that future. I’m not sure if their vision will win, but it’s helping us all see what the end could be.

    Suppose you find yourself, like me, office hunting in the virtual world. In that case, you quickly learn that what exists now is a patchwork of siloed communities, each with different levels of immersion, rules, and financial expectations. In many ways, these worlds’ quirky ideas and explorative nature give the feel of a new frontier to explore – reminiscent of the early days of the internet. And that gets to the heart of one hugely important question.

    What is the Metaverse?

    Let’s start with the Wikipedia definition:

    “In futurism and science fiction, the metaverse is a hypothetical iteration of the internet as a single, universal and immersive virtual world facilitated by the use of virtual reality (VR) and augmented reality (AR) headsets.[2][3] In colloquial use, a metaverse is a network of 3D virtual worlds focused on social connection.”

    This definition limits what many may see as the actual metaverse. An immersive virtual world does not require virtual reality headsets – levels of immersion can happen on a computer in a networked 3D world. The foundation of the metaverse was built by communities like Second Life. Second Life is primarily known as one of the first virtual worlds with an expansive economy and a vast set of communities. In 2003, it was a pioneer, giving anyone connected to the internet a place to create a new life to live and explore. Many have spent thousands of hours immersed in this world. The key to that experience is being immersed, which defines a metaverse. I describe the metaverse as:

    An immersive network of interconnected worlds or communities commonly accessed through devices such as a phone, a computer, or a virtual or augmented reality headset. These worlds can be used for dating, fun, social connection, work, or recreation.

    This definition better encapsulates what currently exists and what is possible. The key to this definition is immersion. Imagine using immersion on a scale similar to the six levels of vehicle autonomy:

    • Level 0 (No Driving Automation)
    • Level 1 (Driver Assistance)
    • Level 2 (Partial Driving Automation)
    • Level 3 (Conditional Driving Automation)
    • Level 4 (High Driving Automation)
    • Level 5 (Full Driving Automation)

    Vehicle autonomy is a scale that differentiates cars by their autonomous driving abilities. Having such a scale allows the US Department of Transportation to define better rules and regulations for a car based on how autonomous a person should expect a vehicle to be. On this scale, a level 5 vehicle would no longer need a steering wheel – it is so autonomous we can depend on it to handle all driving conditions and focus our time on watching a movie or relaxing.

    These rules allow us to acknowledge the foundation of autonomous driving and see what the future will bring us. Many of today’s US cars, including a Tesla, come standard with technologies such as adaptive cruise control, automated parallel parking, blind-spot monitoring, and lane assistance, all of which rate as level 2 features. A scale like this also lets us pause and see how much technology has evolved in a few quick years while realizing the massive chasm of technical intelligence needed for us to move from a level 2 vehicle to a level 4.

    The six levels of human immersion

    If we keep those same six levels of vehicle autonomy in mind and use them as a template for the software and hardware that enables the metaverse, we get the scale of immersion:

    • Level 0 (No Augmentation)
    • Level 1 (Device Augmentation)
    • Level 2 (Augmented/Mixed Reality)
    • Level 3 (Virtual Reality)
    • Level 4 (Physically Immersive Virtual Reality)
    • Level 5 (Full Mental Reality)

    Level 1

    We as humans exist with no augmentation or connection to any reality but the one we can see or imagine. We step into level 1 immersion with the assistance of a device – think game consoles, laptops, or phones. Each of these transfers you into an immersive land. Get lost in Second Life, World of Warcraft, Minecraft, or Roblox? Lost in the scrolling feeds of TikTok, Instagram, or Snap? These worlds exist now and function with whole economies, social interactions, rules, and regulations. Level 1 immersion tends to focus more heavily on sight, with the option for advanced audio. On a scale of immersion, it requires concentration and, sometimes, our imagination to remove our existing reality and truly feel enveloped.

    Level 2

    Level 2 devices connect us with a virtual community as an overlay of the real world, layering virtual or contextual information onto it. The first notable example is Google Glass, which allows you to overlay directions or store reviews on the real world while looking around. It also expanded our idea of sharing by imagining the ability to let someone truly see your viewpoint. Other older successes include games like Pokemon Go that use a phone’s camera to meld the Pokemon world with our own. Additional credit goes to products like Microsoft’s HoloLens, Nreal’s AR glasses, and Snap’s glasses.

    Other fringe devices in this space include AR drones and game consoles that require physical toys to interact with. These devices are less about the immediate visual plane but still invite a user to connect to reality in a different and more immersive way.

    The rumor mill continues to circle an Apple device targeted at this level of immersion. We can only speculate what Apple may bring to the table, but the idea of contextual visual interfaces that evolve Google Glass’s concepts seems probable. In recent years, Apple and Google have incorporated LiDAR and other stereographic sensors into their devices, mixed with developer-friendly tools such as ARKit, making level 2 devices easier to bring to the masses.

    Level 3

    Level 3 requires a virtual reality headset that masks a person’s vision and, optionally, hearing, immersing them in a new world. A clear sign of level 3 is a device that attempts to remove you from your current reality as much as possible while offering an interactive and immersive experience. This means a device should allow interaction through head tracking, hand tracking, or an external gamepad. At level 3, a person should feel as if their senses of sight and hearing have been transported into a different world. Popular devices in this category include the Meta Oculus, PlayStation VR, and HTC Vive. Many of these devices can quickly move between an augmented (level 2) and a level 3 world. Until recently, level 3 VR has mainly been a space for immersive games like Half-Life: Alyx and impressive demos, but the pandemic sped up the development of social spaces, games, and work environments for VR, though many of these are new and early. In social, notable names include Meta Horizons; for work, Spatial IO; and for games, Roblox.

    Virtual reality is still relegated to a tiny segment of the population. Few have regular access to it, so the possibilities and impacts remain largely unexplored. Our content consumption is essentially 2D; for all the visual advances in movies and television, we still look at a 2D plane and primarily rely on audio to create the feeling of 3D immersion. VR changes that and opens the world of storytelling up into a different and much more immersive experience. A horror movie no longer directs your gaze; your experience and fear may change based on how you orient yourself within that world.

    Level 4

    Levels 4 and 5 often feel like a dream but are much closer than you realize. Level 4 devices must trick three senses, typically sight, hearing, and touch, immersing your body in a different world. Level 4 devices commonly track movement to allow users to move around an environment or feel vibrations and feedback. The Meta Oculus Quest and Quest Pro are notable for allowing you to define a boundary and physically walk within those confines while masking this virtually to give users an infinite playground. CES is always a great place to see the many level 4 devices that take this idea further: immersive body suits or gloves that transmit the feeling of touch or impact, walking devices that allow a person to move, walk, or run in place, or even rollercoaster rigs.

    Those examples give a good taste of what is possible with level 4 devices and the amount of equipment needed, which puts it out of reach for many homes. At the same time, the technology is getting more portable, and arcades, art exhibits, and other experiences are opening with immersive level 4 options. A new chain with locations in many major cities opened arcades that offer real-life arena games, including virtual laser tag. The difference is that in these worlds, you play with real people and feel the impact of others shooting at you. One experience I hope to try combines a satellite with sensory deprivation tanks to simulate floating in space.

    Level 5

    At the peak of our imagination, and of scores of anime like Sword Art Online, is level 5 immersion. Level 5 requires an immersion that tricks every one of our senses: sight, hearing, touch, taste, and smell. Imagine the ability to travel to a distant country and smell the countryside while tasting the food. That is the true pinnacle of an immersive world – a place nearly indistinguishable from our reality. Much research and development is needed for level 5 immersion, but a surprising amount is coming from technologies focused on accessibility that have recently begun to converge with big tech. This technology is also further along than many realize.

    Researchers have worked on robotic implants, hearing technologies, assistive sight devices, and brain control for decades. Some of this has begun to merge into products for the everyday consumer. Apple, for example, gives AirPods the ability to alter or amplify external audio, similar to hearing aids. Its watch uses sensors to detect the movements of our fingers (or the muscles attached to them) to enable assistive touch control options. Elon Musk has a company called Neuralink that has enabled primates to play Pong with their minds using brain implants.

    How immersed are we?

    Using our definition of the metaverse, Mindgrub wanted an approachable environment that allows anyone to interact without the need to invest in a headset or other hardware. For Mindgrub, the environment we invite users into should embrace the best immersion possible without requiring more than a laptop or a phone. Accessibility on the run or while traveling feels essential in an office environment. I believe that any true metaverse needs to be hospitable to varied ways of connecting; an individual should be able to cross many, if not all, levels of immersion. Think of our current world: I may invite you to a Zoom, Amazon Chime, or Microsoft Teams meeting, but does that exclude you from joining by phone call? You may have a diminished experience, but as a tool it includes people and lets them connect however they can rather than excluding them.

    True magic happens when different levels of immersion mix. Each device offers a different perspective on the world and allows users to interact differently. Gamers may think of this as an MMO that allows PC users to play with console users (PlayStation, Xbox, Nintendo Switch, etc.) – a keyboard and mouse can be very precise, while a joystick or gamepad can sometimes allow for faster movement. In the end, we have the same meta realm but different ways of interacting through the view of other devices. Of course, this mixing can be contentious. It is widely believed that the precision I mentioned from a mouse and keyboard can offer an unfair competitive advantage over users on a gamepad.

    Regardless of strengths and different levels of immersion, we, as users, always judge our environments to determine the best option for our needs. A TV is far superior for watching a movie or a quick how-to video, but a phone’s ability to watch a video anywhere often supersedes the best experience.

    In our vision of the metaverse, this is crucial. It also leads us to three rules on what we expect for how we use the metaverse:

    • It should target the level of immersion that best delivers the message
    • It needs to support devices from many levels of immersion
    • It should feel simple (and ideally effortless)

    The most important of these rules is #1. Much of the angst and unhappiness with some of Meta’s ideas around the metaverse is that they ignore finding the right technology for the right message. When I call my parents, it is not to interact with my mom and dad’s virtual avatars but to connect with them. For those who choose FaceTime or video chat over a voice call, it is to create the intimacy you can otherwise only get in person. We want to see each other and know that the other person looks and feels well.

    Google has a fascinating research project that attempts to recreate a 3D physical visual representation of a person; given the option, that would be my favorite way to converse without having that person with me.

    Virtual reality allows a whimsical possibility that would not replace a video chat but lets me show ideas in a way I could not before. Some of the Disney and LucasArts ideas of creating a world within a world are among the most amazing things I have ever seen. Why imagine a world of 3D CAD printouts in 2D when a much better medium now exists to create them?

    Mindgrub’s new office

    Ultimately, we learned a lot and had to reimagine what the metaverse meant to us as a company. We did that by coupling our reimagining of the metaverse with our understanding of the vast landscape of existing technologies and tools.

    The number one rule we came back to is to target the best immersion level for the message we need to deliver. In many, many cases, some of the technologies we currently have – while aged – do that very well. We use Slack, Zoom, and Google Workspace and find these to be ok – not always great, but ok – tools for a lot of our work. These tools are not going away, and until something better comes along, that will not change.

    We quickly fell in love with Mozilla Hubs, an open-source platform that lets you freely create worlds on top of a well-backed and robust framework, with a structure that balances openness by allowing you to produce a world and host it independently. This openness allowed us to let people experience one world using different devices at varying levels of immersion. It also allowed us to test the boundaries by pulling Zoom, Slack, and other of our go-to tools into a place we could control and further iterate on.

    For Mindgrub and me, the internet and the interconnected virtual realm are essential. While many technologies make up the tools we use as the foundation of the internet, we can move from website to website regardless of what was used to construct them. Hubs aligns more with web domains, with each web domain or URL representing an independent world.

    I find Hubs kin to modern development frameworks (like Drupal, WordPress, or Node) – a solid bedrock with low-cost or free hosted tools to create an environment, but with the ability to venture out and design and develop something different while riffing on the ideas of others. With Hubs, we can build a module or plug-in and decide to make it freely available to the greater world.

    I’m sure we will all have much more to say as we begin our adventure, but in the meantime, feel free to visit and check out our “lobby.” If you like it, stay a while – VR headset optional.

  10. The algorithms did it

    Over the weekend, I read a piece on algorithms running the city of Washington DC. Articles like this frustrate me in their ability to take a word that once felt clear and make it crowded and confusing. “Algorithm,” even when used correctly, represents such a broad and vast definition that using the word necessitates a pause.

    I can write an algorithm that lets you determine if a number is even or odd:

    `function isEven(x:uint):Boolean {
      // A number is even when dividing by 2 leaves no remainder.
      return x % 2 == 0;
    }`

    That is an algorithm: a set of instructions that solves a problem.

    Guess what? Alexa, Siri, and Google Search are built on algorithms. Do you know what else is? A calculator, a light switch, a TV remote. The number of algorithms we interact with daily is so massive that it may be uncountable. Pointing to an algorithm is like pointing to a single ant in a colony of millions.

    Any large system includes millions and millions of lines of code, much of it almost certainly relying on code written by third-party developers, all focused on solving different types of problems.

    Articles that cite algorithms feel incorrect and morph the meaning by making the nature of math or development seem like the issue. Developers write code that makes simple and complex decisions based on product plans and designs. Those designs state what we expect a system to do and provide criteria to determine if it does what we ask. Code written and designed well will not lie. Could the code or a rogue programmer cause issues? Yes, but many times the true design decisions are made on purpose.

    If I want an application to find me candidates, I need to establish criteria for a good candidate. To build a search engine that returns excellent results, I must first define what makes a great search result. These criteria are always subjective, and that is the actual question: should we fear the mountain of algorithms or acknowledge that company-designed software is purposefully built to do something?
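
    To make that concrete, here is a hypothetical sketch (the field names and weights are invented for illustration) of how subjective criteria become code the moment someone writes them down:

    `interface Candidate {
      yearsExperience: number;
      distanceMiles: number;
      referred: boolean;
    }

    function score(c: Candidate): number {
      // Every number below is a subjective product decision, not a fact of math.
      return (
        c.yearsExperience * 2 -    // value experience
        c.distanceMiles * 0.5 +    // penalize distance
        (c.referred ? 10 : 0)      // favor referrals
      );
    }

    // Ranking "the best" candidates simply applies those decisions at scale.
    const rank = (candidates: Candidate[]) =>
      [...candidates].sort((a, b) => score(b) - score(a));`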

    Let’s not blame the algorithm; instead, let’s ask whether the results we see are exactly what we configured or designed them to be.