The Audiophile Experience: Loving What You Have

This past year, I’ve been quite obsessed with researching audio equipment. And by researching, I mean mostly purchasing.

Based on what I’ve discovered, audio is a lot more difficult to get into than video. IT training covers video equipment quite a bit, with all the essentials like HDMI, DVI, VGA, and DisplayPort, and then everything related to display technology like In-Plane Switching, Twisted Nematic, and Vertical Alignment. But it really doesn’t feel like it teaches you much about audio. Perhaps it’s because most office environments just carry audio over HDMI or something like that. But it is kind of frustrating when there’s a whole area out there to explore that you’re just not trained on.

I’m definitely not what you would call a “sound guy”. In fact, most of the audio equipment that I’ve bought this past year barely scratches the surface. It mainly consists of Sonos speakers (which I reviewed in a previous post), and another pair of AirPods. This time, the AirPods Max. I initially got the Sennheiser Momentum 4 Wireless after a recommendation, but decided to return them after a few months because I found them a little uncomfortable, as well as frustrating to pair with my devices. Specifically, the audio cutting out on Windows, and the compression that kicks in when using the microphone at the same time as audio playback. A lot of these are really just major Bluetooth limitations that Sennheiser doesn’t have control over. But for a device that showed an MFi label on the box, it really didn’t feel like one.

Bluetooth is surprisingly behind when it comes to audio. Perhaps it’s just the low bitrate, but for something that’s existed for this long, it’s still kind of irritating that using a Bluetooth device isn’t as seamless as you would hope.

Although the audio on the AirPods Max doesn’t sound nearly as good as the Sennheiser headphones I had, they still sound pretty good. And I would definitely love to get a pair of wired Sennheiser headphones. The only real problem is that I don’t feel like I know enough about audio equipment to feel comfortable purchasing such an expensive pair of wired headphones. I’m worried they may not work well from a plain 3.5 mm headphone jack (something a few of my devices don’t even have). And I definitely don’t want to have to carry an amplifier around with me everywhere just to hear my music.

While Bluetooth may not be a great experience for headphones, since a vendor can only integrate so deeply with any given platform, using a cable is. I find it pretty cool how iOS automatically stops the music if it detects that you’ve unplugged the headphone jack. And of course, it’s pretty easy to switch devices when you’ve got a cable you can hot swap.

Despite already owning a pair of second-generation AirPods Pro, I still feel like I can justify buying the AirPods Max, even though they have far fewer features due to their older chip, given how well they cancel out wind (probably thanks to their over-ear design), as well as how much bigger the audio sounds.

I still continue to use the AirPods Pro, though. Mainly because I find them more convenient for more active experiences, and also because I don’t want to get my AirPods Max damaged from sweat. 

Lossless Audio

Differences in audio quality are really difficult to distinguish, at least compared to differences in video quality. Just like with video, you need high-end equipment to properly spot the difference. But unlike with video, it seems like our ears just aren’t as well tuned to pick out differences in sound as our eyes are with sight.

The Wikipedia article for audiophile mentions that a lot of the ways people try to identify which version of a track sounds better are mostly speculation, with plenty of pseudoscience in the process.

Lossless audio on the music streaming service I use, Apple Music, is really confusing. Mainly because it isn’t clear which path you need to take or what things you need to buy to listen to it at its best quality without just wasting bandwidth on sound you can’t hear.

Apple Music supports lossless audio in the Apple Lossless Audio Codec (ALAC). However, the AirPods Max only support it through a cable (something I don’t have a problem with), and according to some sources, among the AirPods Pro, only the newest version with the USB-C case supports it. I’m going to wager that the newest regular AirPods do as well, but I’m not too sure.

None of the devices in the ecosystem support Hi-Res Lossless. But frankly, that’s OK, mainly because I’ve heard that nobody can actually hear the extra frequency range its higher sample rates capture, and the option is really just there for marketing purposes.

Switching between the wireless and wired modes on my headphones, I don’t feel like I can hear a difference. But sometimes I wonder whether, if I listen enough, I will.

Equalizers

Equalizers (EQ) are apparently one of many tools that sound enthusiasts have to make their music sound better. The only problem is that I get nervous touching any of the settings, because I’m worried that I’m going to ruin how the music was supposed to sound.

Part of me knows that this is a pretty nonsensical worry, mainly because the way we all hear sound is incredibly subjective and none of us hear things the same way. But I’m just worried that if I adjust the settings too much, I’ll never be able to hear a song the same way again.

My Own Hearing

I have really sensitive hearing, and one of the things that probably takes away from a pleasant listening experience is the fact that I set volume limits, where possible, to be pretty low.

I have this specifically set up on my phone, where iOS provides the ability to cap headphone audio at a chosen decibel level under its Headphone Safety settings.

I do try to care about my ears. Especially given that I will need them for a very long time. 

Conclusion

Even as I do all of this, I think to myself:

It’s really stupid to be thinking about all these different things when I could just be enjoying the music.

But at the same time, it’s also really nice to have crisp sounding audio. I’ve never been to any concerts or anything like that, so I really don’t have a reference point for what live music sounds like. I used to play the piano when I was younger and took lessons. But the store I took them at was an organ shop that mostly sold keyboards and other electronic instruments, so I can’t really use those experiences either.

A part of me says that I just really don’t have a vision for buying all this stuff. My listening preferences primarily consist of soundtracks, pop music, and sometimes classical and new age (for focusing on workloads). And a lot of the stuff in my library is a mixture of artists you’ve probably heard of, and a selection of ones you haven’t.

Either way, music is pretty important to me. So I’d like to continue listening to it, whether that’s just in the background or something I’m actively paying attention to.

Vibe Coding: How to Amplify Your Technical Debt

So recently, there’s been a new term going around describing a new phenomenon.

It’s called Vibe Coding, and it basically involves people writing code entirely through the use of Generative AI. Not just the small snippets they need help with. The. Whole. Thing.

I just want to get this out of the way: I think this is a really bad idea. And I’ll admit that I’ve ended up vibe coding a few times, writing out elaborate prompts that build a whole app or system. But more often than not, I’ve ended up abandoning those projects for one reason or another. Or they’ve had bigger consequences (such as deleting a whole batch of video files I had wanted to download and organize on my media server based on file name).

My worst moment with it involved a script I had made for self-hosting various Docker services split across multiple Compose files. The script was supposed to go into each directory, bring the Compose file in it up or down (depending on a flag), and thus save me a lot of typing across different files.
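
For context, the task itself is simple enough to fit in a few lines. Here’s a minimal sketch of what such a script could look like, written in Python rather than my original shell script; the services directory path and layout are assumptions for illustration:

import subprocess
import sys
from pathlib import Path

# Hypothetical root folder holding one subdirectory per service,
# each with its own compose.yaml / docker-compose.yml inside.
SERVICES_ROOT = Path("/opt/services")

def main() -> None:
    # Expect a single flag: "up" or "down".
    action = sys.argv[1] if len(sys.argv) > 1 else "up"
    if action not in ("up", "down"):
        sys.exit(f"usage: {sys.argv[0]} [up|down]")

    for service_dir in sorted(SERVICES_ROOT.iterdir()):
        if not service_dir.is_dir():
            continue
        # "docker compose" picks up the compose file in the
        # working directory automatically.
        cmd = ["docker", "compose", "up", "-d"] if action == "up" else ["docker", "compose", "down"]
        print(f"Running {' '.join(cmd)} in {service_dir}")
        subprocess.run(cmd, cwd=service_dir, check=False)

if __name__ == "__main__":
    main()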

Eventually though, the script got too complex to maintain, as different sections started making no sense. Exacerbating the entire problem was an inexplicable bug involving Compose containers being given the wrong names, and it was enough for me to tear everything down and just move to Proxmox.

The lack of inspection of what Copilot was outputting, coupled with a general lack of knowledge of what I was doing, made it difficult to figure out why things weren’t working.

Things like code autocompletions and quick prompts to generate small snippets aren’t inherently bad. But once you start taking advantage of a tool like GitHub Copilot, it can be really hard to stop relying on it, and easy to start abusing it for everything you have to do. Especially when you want to get things done sooner and out of the way.

It’s why companies are constantly pushing AI down our throats, and wasting millions giving it away to students and other businesses in the hope it will get them hooked. They are fully aware of the power these tools have, especially in a society that favors quantity over quality, and they have the upper hand in peddling these things.

It’s not just that Large Language Models don’t have an understanding of the text they output; it’s also the knowledge you never build along the way. That knowledge certainly isn’t always easy to find, and gaining it can often take longer than writing the actual code itself. Which is also what makes using a Large Language Model for code assistance appealing to begin with: write now, understand later, and build up more technical debt in the process.

Looking through the r/vibecoding subreddit, it’s kind of horrifying to see how others are enthusing about writing sentences to generate code, with very little knowledge of how to write code, while also encouraging others to do the same.

Some of the posts there (and I know I shouldn’t believe everything I read on the internet) are just horrifying to see in terms of incompetence. Others are really funny, like this one, where a user lists all the different things they’ve built, and then just:

Later today I am going to build a billboard system for my local theater so that they can plan and track all of their movies between offices.

I am loving this.

EDIT: The billboard project is MUCH bigger than I thought that it would be and is taking a lot more time than I anticipated. That’s okay, it is helping me to learn good practices for Vibe coding.

Basically, the realization that trying to work with a business to meet its needs requires much more than the ability to just prompt your way out of coding.

As for the more horrifying stories, slightly less related to coding: I heard on that subreddit about a person who managed to become a data analyst after having worked in the automotive industry… by learning from AI.

It’s impressive as a challenge, trying to see what you can do when direct control is locked away from you. And I’m certainly not against there being systems that let anyone in the world start creating their own software. But it’s also easy to build up a false sense of confidence as you begin to think that you are an expert at what you are doing.

The risks may not be as high for personal offline projects, or code that’s just needed to automate something once and never again. But my general anxiety lies in the fact that people are deploying entire services onto the internet with not only very little knowledge about hosting online services, but without even knowing exactly what they’re hosting to begin with.

When you host an off the shelf service like Nextcloud or Uptime Kuma, you don’t need to know everything about the service you’re hosting. As long as you have a good knowledge of how the internet works, awareness of common vulnerabilities and maintenance, and you read enough of the documentation to gain an understanding of the software you’re deploying, you’re usually good to go.

But many of these people are just typing out a few sentences, taking the output, deploying something they don’t understand beyond a rough description (one that may not even match what they wrote), and exposing it to the internet.

There have already been a few cases of people having their vibe coded services pwned with minimal effort. And with no knowledge of how your system works or what parts it might include, I couldn’t imagine trying to do an autopsy on such a system.

It’s not “trendy” or “hip” to be building entire projects without an understanding of what you’re doing. It may be fun, but it can also be really dangerous. Especially if you don’t have a plan or vision for what you are doing.

So please, be careful with AI tools, and don’t end up like me, or any so-called “vibe coders”.

3D Printing

So these past couple of months have really been prime time for me to finally get ahold of a lot of the things I’ve always wanted. I managed to get a robotic vacuum, a nice set of speakers, and, today’s topic, a 3D printer.

The specific model I got isn’t particularly exciting. It’s from a random Chinese brand called Flashforge, and the model is the Flashforge Adventurer 5M. I got it at the beginning of February, and it arrived while I was sick. But I immediately summoned what strength I had to haul it downstairs and set it up.

During the process, however, I accidentally broke the hinge on the display. So, after a few weeks of waiting for a replacement and fiddling with the display ribbon to get a minimal amount of picture to set a few things up, the printer was finally fully up and running.

3D printers are interesting because they’re not exactly like regular printers. A 3D print isn’t always successful, whereas it’s pretty rare that you’d need to print something out again with a regular printer (barring running out of ink, paper jams, etc.).

The entire 3D printing process is an intricate one, one that involves precision and depends on everything being exactly right. Of course, if you maintain your printer well, you don’t have to worry as much about it messing up. But even then, not everything is going to come out exactly the way you want it. From what I know, a 3D printer prints by following the instructions it is given. These instructions command the printer to extrude filament that is hot enough to bend into shape, but cool enough that it quickly hardens so that another layer can be placed on top of it. Repeat this process hundreds of times, and eventually, you end up with a finished product.

As you might guess, just like building a house by laying bricks, this is a delicate process, and the desired result isn’t always guaranteed on the first try. Problems I’ve already run into include:

  • The filament gets jammed, leading to nothing being printed as the nozzle moves around.
  • The build is knocked over, causing the printer to try to lay filament on air (which falls to the plate, creating a mess).
  • The printer extrudes too much filament at once, resulting in a giant blob of filament stuck together at the nozzle.
  • The build gets stuck to the nozzle, leaving an unfinished print with a giant goopy mass attached to it.

Many of these problems can be fixed with some common troubleshooting I picked up from my A+ certification, and often just involve running some functions on the printer itself. But it definitely doesn’t take away the disappointment of a failed model and all its wasted filament.

Why I wanted a 3D Printer

In addition to thinking they were cool, I wanted to be able to print a lot of the things I had been considering ordering off of Amazon, but didn’t really want to, because it just didn’t seem worth it to spend $10-30 over and over again on random things made out of plastic. Especially compared to making a one-time purchase, plus recurring purchases of filament, that could make a lot of those things. Coupled with a little DIY, I could probably save a lot of money and make something that looks just as good, and more customized as well.
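
To put rough numbers on that (every figure here is my own assumption, not a measured one), here’s the back-of-the-envelope math:

# Back-of-the-envelope cost comparison; all figures are assumptions.
spool_price = 20.00     # USD for a 1 kg spool of PLA
print_grams = 50        # filament used by a typical small gadget print
amazon_price = 15.00    # what a comparable plastic item might cost online
printer_price = 300.00  # a rough price for a printer in this class

cost_per_print = spool_price * print_grams / 1000   # ~$1.00 of plastic
savings_per_print = amazon_price - cost_per_print   # ~$14.00 saved per item
print(f"Prints needed to pay off the printer: {printer_price / savings_per_print:.0f}")

Even if every one of those numbers is off by a factor of two, the printer pays for itself after a few dozen prints.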

At least right now, I can do that, provided there’s an STL file available. In terms of creating models, I’m only a little bit good with Blender, and FreeCAD feels way out of my depth for something I could learn. A lot of the stuff behind it feels like something an engineer would do (I wanted to be a software engineer at one point, but only because I thought the title “engineer” sounded really nice and important).

Instead of going on about that, though, I think I would rather just infodump a lot of information about the 3D printing pipeline so you can get an idea of how it works and how interesting it can be.

How the 3D printing process works

The printing process starts with the model you want to use. This will probably be in the form of a Stereolithography (STL) file. There are a few other formats as well, but this one is the most common and widely compatible.

Whether you made this model or downloaded it off the internet, its first stop is the slicer. For me, that’s OrcaSlicer, an open source application based on PrusaSlicer and Bambu Studio.

The slicer is where the model is inspected, adjusted, and checked for whether it’s physically possible to print as shown (with adjustments made, such as adding supports, if needed), then converted into instructions the printer can use to adjust its settings and know what paths it should take while extruding filament. The slicer converts the model into a programming language called G-code (which was first developed in 1963, according to Wikipedia). It looks a bit like Assembly, with code like this:

;WIPE_START
G1 F3000                      ; set movement speed to 3000 mm/min
G1 X-21.395 Y-9.05            ; drag the nozzle across XY to wipe off oozed filament
;WIPE_END
G1 X-14.638 Y-5.5 Z.6 F30000  ; fast travel move, lifting the nozzle to Z=0.6
G1 X.099 Y2.242 Z.6           ; continue traveling to the next print position
G1 Z.2                        ; lower back down to the 0.2 mm layer height
G1 E.8 F2100                  ; prime: push 0.8 mm of filament through the extruder

Basically, it’s a lot of commands that instruct specific adjustments the printer should make to its positioning and settings. The G1 command specifically tells the printer to move the print head in a straight line.

There are other commands as well, each of which looks something like the ones shown but with a different letter and number (there’s also a tiny parsing sketch after this list). They tell the printer a variety of things, from:

  • Adjusting the fan speed, plate level, and extruder temperature
  • Starting and stopping certain components
  • Turning and moving the print head in different directions
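
To show just how mechanical the format is, here’s a tiny parsing sketch (my own toy code, not anything from a real slicer or firmware) that splits a line of G-code into its command and lettered parameters:

# Toy G-code line parser; illustrative only, not from any real slicer.
def parse_gcode_line(line: str) -> tuple[str, dict[str, float]] | None:
    # Strip comments (everything after ';') and surrounding whitespace.
    line = line.split(";", 1)[0].strip()
    if not line:
        return None  # blank or comment-only line
    words = line.split()
    command, params = words[0], {}
    for word in words[1:]:
        # Each parameter is a single letter followed by a number,
        # e.g. X-21.395 or F3000.
        params[word[0]] = float(word[1:])
    return command, params

print(parse_gcode_line("G1 X-21.395 Y-9.05"))
# ('G1', {'X': -21.395, 'Y': -9.05})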

Overall though, it’s not particularly important to know any of this if all you want to do is print. But it’s good to know so you can understand what’s happening, especially if you want to start playing with more advanced, open source firmware like Klipper, which relies more on you setting up different macros and gives you much more fine-grained control of your printer.

The point is, the slicer converts the model into G-code and gives you a preview of what the model might look like when it’s printed. From there, you can either export the G-code file, place it onto a flash drive, insert that into the printer, and print from the on-screen menu. Or, in my case, send the G-code file directly to the printer’s internal storage and have it start printing automatically.

If it helps any, you can think of the slicer as essentially being the print dialog box you get whenever you press Control/Command + P. It has a ton of options, but chances are, depending on what you’re doing, you may only need to change a single option, or none at all. It just so happens that printing a 3D model isn’t something one does every day, so there’s no justification for building a slicer into every app.

What can I Print?

The thing I have printed the most (three of them so far) is #3DBenchy, a popular little tugboat model that serves as a calibration model of sorts. Its symmetrical design with lots of overhangs and caves, coupled with its ability to be printed in place (sliced without changing any settings), its quick print time (a little over half an hour for me), and how cute it looks, makes it a popular model for testing whether your printer is working correctly. In fact, as of right now, it’s one of the most downloaded models on Thingiverse, and it recently entered the public domain as well.

Conclusion

These printers, while a little cumbersome to work with, and involving some things a little beyond my scope of knowledge, are ultimately very interesting to work with. As long as you make sure to adjust your expectations, and know what they can and cannot do, I would recommend getting one if you have the ability to.

Sonos Ecosystem Review

So, a while ago, there was a huge uproar over a company called Sonos, a manufacturer of wireless speakers, mainly because of a new version of their app that they deployed.

While the whole point of the uproar was essentially to stay away from the speakers, it drew me closer to the idea for some reason.

To talk about what makes Sonos speakers unique, a little background is needed. In addition to being relatively high quality speakers (disclaimer: I’m not an audiophile, nor am I affiliated with Sonos), Sonos speakers exist within their own ecosystem, in which audio playback can be grouped, ungrouped, and transferred between speakers. If this sounds similar to what AirPlay (also available on some speakers) or select Google Cast devices can do, that’s probably because Sonos was the one to popularize the concepts behind it.

What separates Sonos from those devices, however, is that unlike those services, Sonos speakers play audio directly from a streaming source, which can be configured to be either a popular streaming service or a local SMB library of your own music, and this is mostly the only thing they do. In addition, these speakers aren’t just individual endpoints to play to; multiples of the same speaker can be set up to work as if they were a stereo system. Essentially, these speakers let you create an entire home audio system without the need for excessive wiring or drilling through your home.

The app itself functions essentially as an audio remote, as well as the main place to manage the settings for your speakers and quickly set up new ones.

A screenshot of the Sonos App on an iPhone
The Sonos app is fairly simple, its primary purpose being to serve as a controller for your audio, with the media player at the bottom pulling out to reveal all the speakers on your network. Changing the output to another device is as simple as tapping it in the list.

Despite needing an account for speaker setup and linking music streaming services, the app has no cloud connection behind it, unlike most IoT apps. Meaning you need to be connected to the same network as the speakers in order to control them and adjust their settings. On the other hand, other users can download the app and join an existing system, then start playing and controlling audio without needing to sign in or create an additional account (signing in is needed to adjust settings, however).

One of the major benefits Sonos touts about their app is that audio can continue playing even during a phone call, since the app doesn’t rely on any system media players and simply streams the audio directly to the speaker. I’ve personally found the app a lot nicer for playing streaming radio stations (something I can get through Apple Music, as well as their own in-house, ad-supported radio service; the latter I only really use for the ad-free white noise and rain stations).

While I’ve never seen the original app, I can tell the new app is likely built on some cross-platform framework. And while it’s not as bad as some users have complained (much of the work at Sonos went into damage control over the app while newer products were pushed aside, so it’s likely many of the bigger issues have been worked out), I have had a few hiccups here and there with the mobile apps.

Other features

Music playback aside, the app lets you set up quite a few other features as well. For example, alarms can be set up within the app, which just play a predetermined song at a certain time. There are also sleep timers, which work like most sleep timers on other platforms, fading the song out after a set amount of time.

The speakers themselves also work with Apple AirPlay, allowing Apple devices to cast audio to them, or a HomePod to be asked to play audio in a specific room. They can also be set up to be controlled from Alexa, as well as Apple Home and Google Home, though the Apple Home integration is a little more limited and can only control playback on speakers currently playing over AirPlay.

One of the other features that can be set up is Trueplay, an equalization feature calibrated either by waving an iPhone held upside down around the room, or by using a speaker’s built-in microphones to measure the acoustics of the room, adapting the audio to it. Whether you hear any enhancement will depend on the shape and size of your room, as well as the calibration method you use. I personally found the former to sound a lot better.

The aforementioned microphones can also double up for speech recognition with a voice assistant, Alexa being the main supported one through the Alexa Built-in program. While I don’t actively use Alexa in my home, Alexa on a Sonos speaker is better suited as a satellite to a home already using an Amazon Echo than as the main set of devices, due to a Sonos speaker’s inability to function as a Matter controller over either Wi-Fi or Thread.

In addition to Alexa, Sonos also maintains their own in-house assistant called Sonos Voice Control. It mainly functions as a companion to the main app, letting you perform some of its most common functions without the need to actually open it. But other than that and a few other minor things, that’s mostly all it does.

API

In my opinion, one of the biggest saving graces for Sonos during the app fiasco was their well-crafted local API. While they do have a separate cloud-based API for more cloud-centric control (such as creating content providers the speakers can stream from, or third-party web apps), there’s a local API as well, relying on open standards like UPnP. While it’s not as well documented as their cloud APIs, it’s still great that the speakers have a local API at all. It won’t really save anyone if the company shuts its doors and all the cloud functions go offline, but it still gives other apps more ways to latch on.
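
As a taste of what that local API makes possible, here’s a minimal sketch using SoCo, a third-party open source Python library that speaks the speakers’ UPnP interface (the library isn’t affiliated with Sonos, and the speaker name here is made up):

import soco

# Discover all Sonos speakers on the local network via UPnP.
# (discover() returns None if nothing is found, so guard for that.)
speakers = soco.discover() or set()
for speaker in speakers:
    print(speaker.player_name, speaker.ip_address)

# Grab a specific speaker by name (hypothetical) and poke at it.
office = next(s for s in speakers if s.player_name == "Office")
office.volume = 25                        # volume is a 0-100 property
office.play()                             # resume the current queue
track = office.get_current_track_info()   # dict with title, artist, etc.
print(track["title"], "-", track["artist"])

As far as I know, this is also the library that Home Assistant’s Sonos integration builds on.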

One of those apps is on my list of favorites… Home Assistant. Sonos is actually a featured integration within HA, and the experience really shows: speakers are automatically discovered and added, and they update in real time. There are even a couple of HACS cards that can show you the exact playback position on a speaker.
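
And once the speakers are in Home Assistant, they’re reachable through its standard REST API too. A minimal sketch, assuming a default HA setup; the URL, token, and entity name below are all placeholders:

import requests

HA_URL = "http://homeassistant.local:8123"  # assumed HA address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA profile

# Call the media_player.volume_set service on a hypothetical entity.
resp = requests.post(
    f"{HA_URL}/api/services/media_player/volume_set",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "media_player.office", "volume_level": 0.25},
    timeout=10,
)
print(resp.status_code)  # 200 means the service call was accepted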

Ironically, while writing this, the introduction to the Sonos API documentation has actually given me a better understanding of what Sonos products do than their actual marketing pages.

Verdicts

Personally, I would recommend Sonos to anyone with an Apple device, or anyone who is looking for something more streamlined for music compared to the bells and whistles of most smart speakers.

Compared to other smart speakers, Sonos speakers can seem much more limited at a much higher cost. But overall, Sonos really isn’t in the same product category as smart speakers like the Echo or HomePod. A Sonos isn’t just a regular Bluetooth speaker (though some do have that functionality), but being an assistant isn’t its goal either.

I have found Sonos speakers much better for playing audio, in part thanks to their well-designed (if not well-coded) app. A Sonos feels like a device I actually want to play music to (and I feel a constant twinge of guilt that I’m not making the most of the one sitting on my desk, aside from falling asleep to white noise). While AirPlay suggestions on iPhone make it a little easier to find a device to play music to, casting from any device can sometimes be frustrating (especially the process of disconnecting). And the way you get most smart speakers to play music is by asking for a specific song, which can feel like wishing upon a star: you get the same song but from a different album, or another song entirely.

If all you want to do at a given moment is just play something around the house, the app feels really well designed for that, even if it requires a few extra seconds of grabbing your phone. And the fact that I can share that ability with my family, without them needing to go through the process of signing up, was a really great design choice.

My only real concerns about Sonos have been a few class action lawsuits around the company and some general uncertainty around how long products get security updates. But other than that, they work exceptionally well, and I wish I had the ability to acquire more of their speakers for my home.

Loss of Computer Control

The Spring 2025 semester has left me seething quite a bit, mainly because I’ve felt helpless. There’s been a ton more group work, and a lot of assignments that have forced me to bend over backwards in uncomfortable ways, resulting in me doing more work than I need to. While this could just be a fear of change taking over, it’s mainly that when you hear the excuse “well, you’d better get used to it, because you’ll have to do this all the time at work”, at least at work you get paid for your trouble, which is a valid incentive. When college tuition costs thousands of dollars for hundreds of hours of work, you expect to be able to just submit things without much hassle.

I should probably start out by actually describing the way I work:

The way I work involves 85% preparation and 15% work. I spend more of my time getting set up and ready than I do actually working. As such, I hate having to set everything up again.

When you’re doing online learning, software is going to be your lifeline for doing almost anything. And these are the apps that have given me the most trouble:

Discord

One of my biggest frustrations this semester has been Discord. Discord isn’t (as far as I can tell) an officially supported university application, but that hasn’t stopped students and professors from relying on it.

Discord is a frustrating app, and that goes beyond its status as nagware. It’s a fundamentally flawed app with serious usability issues, and while quite a few of them have been solved over the years (including the addition of an official server-rules onboarding process and easier role selection), the process of engaging with a server still feels more intimidating and cumbersome than it should be, especially when you have a goal in mind.

My second biggest gripe about Discord has been, and always will be, its blatant usability issues, especially surrounding moderation and server management. Discord’s power for collaboration can only be unlocked by someone willing to spend hours configuring it correctly, and there are plenty of servers (especially smaller, more casual ones) that simply do not need six dozen channels of everything from #general, #memes, and #cooking to #vacation-photos, and that simply try to police the flow of conversation without ever questioning whether there is much need to do so.

While bigger, better-prepared servers may not have these problems, and may even have a genuine need for all those channels, Discord’s server functions lull the average person setting up a small group into a false sense of superiority: they feel like they have more than everything they need, when in reality they have very little need for any of it.

Its choice to eschew a channel selection list (similar to what Slack offers, or what most IRC clients provide through /list) in favor of simply muting channels and toggling “hide muted channels” means needing to wade through the entire list, finding every channel I will never visit, and muting each one.

One of my professors has made a server that includes channels for literally every single one of his classes, resulting in an extremely long laundry list of channels to wade through. His rationale for making us create yet another account (while also stating that the web app will not suffice for the class, despite us only ever needing to sign into the server once for an initial set of points) is simply to bypass Slack’s message retention limits (despite plenty of other professors living with them).

This alone puts me at a disadvantage, because while the university may provide access to plenty of other, more accessible, less obnoxious options, many people will simply stick with what the professor recommends because they don’t care.

I won’t blame people too much for not liking email. It can be frustrating to learn the process of working in email threads, and email just isn’t acceptable for long-term collaboration among most people. The Linux kernel may be developed over email, but that’s also thanks to the use of an external tool like Git.

The biggest gripe that’s been impacting me about Discord, however, has been its absolute lack of account flexibility, which likely stems from the fact that Discord was never really built to be a productivity platform to begin with.

I have two Discord accounts: a personal one with my online alias, and a more limited account using my real identity (to comply with my professor’s demands), set up with a ton of privacy restrictions and notes on the account to discourage others from contacting me there as much as possible. I even noticed, 40 or so days later, that my professor had requested to start a DM with me on the platform (I had toggled off direct messages and turned on message requests). Perhaps it could backfire one day, but at this point, I don’t care.

My main gripe in all this, however, is how difficult it is to switch accounts on Discord. It’s flat out not possible on mobile without logging out, and even on desktop, while you can be logged into multiple accounts, only one session can be active at a time. This makes it incredibly difficult for me to balance classwork and life.

It essentially feels like a punishment for being more conscious about online privacy and not treating everything like Facebook, where you use your real-life identity for everything.

Google Workspace

Google Workspace is a little more tolerable, but I mostly dislike Google’s vision: all of your work being done in a web browser (especially Google Chrome), coupled with a heavily abstracted “cloud” that hides away most of the system. My mother recently had to change jobs to one that uses Google Workspace, after having used Microsoft Office for 20 years, and the change has been quite rough for her.

I’ve had to use Google Workspace (formerly G Suite, formerly Google Apps) ever since middle school, when the district decided to implement a 1:1 learning program with the cheapest, most sluggish, most locked-down Chromebooks possible. No BYOD, no looking at alternatives, not even changing your homepage or your miserably insecure default password (which happened to be our fixed student lunch ID plus two zeros at the end).

Simply put, the entire system was built around making sure there were absolutely no excuses for a K-12 student not to be able to get work done, by locking down the system so much that there would be no way to break it.

The result was a Chromebook we had to carry around in the bulkiest case possible (there were technically reprimands for not doing so, but nobody really cared). And the system only kept getting worse. In high school, they announced that they would be scaling back some of the government-mandated web filtering (in the US, the Children’s Internet Protection Act of 2000 requires public schools and libraries to filter pornography and other obscene content in order to qualify for certain government funding) in favor of an MDM-mandated monitoring and blocking web extension.

This extension was miserable. While others had been used before it, this one pretty much eradicated all hope of getting around the system, even if you had a good reason to. It also monitored every page you visited and sent it back with proactive alerts (I would know, because I was boredly browsing Wikipedia during class one day, clicked on the article for Suicide, and promptly got called down to the counselor’s office 15 minutes later).

This, coupled with other recent programs in high school, such as the recent policy (now being considered as a mandate by some states, and also the primary reason I invested in an Apple Watch) of placing your phone in a holder, resulted in a lot of frustration over things you don’t get much control over.

I can’t get too mad in some places about the system. Schools, especially publicly funded ones, are always miserably underfunded by the government. And this was during the mid-2010s, when everyone knew they needed to get future students ready for working with computers, but simply did not have the IT resources or funding to make it happen. And considering that Google, in a completely unsustainable move designed to hook customers, offered schools unlimited storage for years (until they didn’t), they managed to reel in a lot of schools and make the concept of Degoogling unrealistic for many young students (but that’s a topic for another post).

Thankfully, during the second semester of my senior year in 2021, I did manage to sneak out a bit more freedom to work the way I wanted, as the COVID-19 pandemic forced the school to pretty much improvise a lot of things (even with systems they already had working), so I was able to take advantage of the clutter by bringing my own device. (I was technically even exempt from wearing a mask due to having a disability, but I chose to wear one anyway for the greater good.)

Having access to better computers now, and not just the cheapest netbooks possible, I find Google Workspace isn’t nearly as unbearable as it was in high school. But it still isn’t my favored platform compared to traditional local desktop editing.

Anyway, back to Google Workspace specifically: my university pays for both Google Workspace and Office (while disabling OneDrive). And of course, having this many options means professors basically get to mandate what gets used. Simply put, right now I have an assignment that only accepts a URL for a submission, and not a DOCX file, for whatever reason.

(Yes, this is a wall of text that mostly just lists a bunch of minor inconveniences. But I really want to drive home that my lack of tolerance for these things isn’t from some singular mishap, but from something that has been continuously eroding for years.)