Vibe Coding: How to Amplify Your Technical Debt

So recently, there’s been this new term going around, describing a new phenomenon.

It’s called Vibe Coding, and it basically involves people writing code entirely through the use of Generative AI. Not just the small snippets they need help with. The. Whole. Thing.

I just want to get this off my chest: I think this is a really bad idea. And I’ll admit that I’ve ended up vibe coding a few times, writing out elaborate prompts that build a whole app or system. But more often than not, I’ve ended up abandoning those things for one reason or another. Or they’ve ended up having bigger consequences (such as deleting an entire batch of video files I had wanted to download and organize on my media server by file name).

My worst moment with it involved a script I had made for self-hosting various Docker services split across multiple Compose files. The script was supposed to go into each directory and spin the Compose file in each one up or down (depending on the flag), saving me a lot of typing across different files.
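
For context, the idea was roughly the sketch below. To be clear, this is a hypothetical reconstruction, not the original script; the ~/stacks layout and the flag handling are assumptions:

#!/usr/bin/env python3
# Hypothetical reconstruction of the idea, not the original script.
# Assumes each service lives in its own subdirectory under ~/stacks,
# with a docker-compose.yml inside it.
import subprocess
import sys
from pathlib import Path

STACKS_ROOT = Path("~/stacks").expanduser()  # assumed layout

def main() -> None:
    if len(sys.argv) != 2 or sys.argv[1] not in ("up", "down"):
        sys.exit(f"usage: {sys.argv[0]} up|down")
    action = sys.argv[1]
    for compose_file in sorted(STACKS_ROOT.glob("*/docker-compose.yml")):
        # "up -d" starts the stack detached; "down" stops and removes it.
        cmd = ["docker", "compose", "-f", str(compose_file), action]
        if action == "up":
            cmd.append("-d")
        print(f"==> {compose_file.parent.name}: {' '.join(cmd)}")
        subprocess.run(cmd, check=False)

if __name__ == "__main__":
    main()

(Incidentally, Compose derives its project name, and therefore its container names, from the directory of the Compose file unless you set one explicitly, which may well be where my wrong-names bug came from.)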

Eventually though, the script got too complex to maintain, as different sections stopped making sense. Exacerbating the entire problem was an inexplicable bug involving Compose containers being given the wrong names, and it was enough for me to knock everything down and just move to Proxmox.

The lack of inspection of what Copilot was outputting, coupled with a general lack of knowledge of what I was doing, made it difficult to figure out why things weren’t working.

Things like code autocompletions and quick prompts to generate small snippets aren’t inherently bad. But once you start taking advantage of a tool like GitHub Copilot, it can be really hard to stop relying on it, and really easy to start abusing it for everything you have to do. Especially as you want to start getting things done sooner and out of the way.

It’s why companies are constantly shoving AI down our throats, and wasting millions giving it away to students and other businesses in the hope it will get them hooked. They are fully aware of the power these tools have, especially in a society that favors quantity over quality, and they have the upper hand in peddling them.

It’s not just that Large Language Models don’t have an understanding of the text they output, it’s also the knowledge you never build along the way. That knowledge certainly isn’t always easy to find, and finding it can often take longer than writing the actual code itself. Which is also what makes using a Large Language Model for code assistance appealing to begin with. Write now, understand later, and build up more technical debt in the process.

I’ve been looking through the r/vibecoding subreddit, and it’s kind of horrifying to see how others are just enthusing about writing sentences to write code, with very little knowledge about writing code, while also encouraging others to do the same.

Some of the posts there (and I know I shouldn’t believe everything I read on the internet) are just horrifying to see in terms of incompetence. Others are really funny, like this one, where a user mentions all the different things they’ve built, and then just:

Later today I am going to build a billboard system for my local theater so that they can plan and track all of their movies between offices.

I am loving this.

EDIT: The billboard project is MUCH bigger than I thought that it would be and is taking a lot more time than I anticipated. That’s okay, it is helping me to learn good practices for Vibe coding.

Basically, the realization that working with a business to meet its needs requires much more than the ability to just prompt your way out of coding.

As for the more horrifying stories, slightly less related to coding: I heard on that subreddit about a person who managed to become a data analyst, after having worked in the automotive industry… by learning from AI.

It’s impressive as a challenge, to try and see what you can do when direct control is locked away from you. And I’m certainly not against there being systems that let anyone in the world start creating their own software. But it’s also easy to build up a false sense of confidence as you begin to think that you are an expert at what you are doing.

The risks may not be as high for personal offline projects, or for code that’s just needed to automate something once and never again. But my general anxiety lies in the fact that people are deploying entire services onto the internet with very little knowledge not only of hosting online services, but of what exactly they’re hosting to begin with.

When you host an off-the-shelf service like Nextcloud or Uptime Kuma, you don’t need to know everything about the service you’re hosting. As long as you have a good knowledge of how the internet works, an awareness of common vulnerabilities and maintenance, and you read enough of the documentation to gain an understanding of the software you’re deploying, you’re usually good to go.

But many of these people are just typing out a few sentences, taking the output, deploying something they don’t understand beyond a rough description (one that may not even match what they wrote), and exposing it to the internet.

There have already been a few cases of people having their vibe coded services pwned with minimal effort. And with no knowledge of how your system works or what parts it includes, I couldn’t imagine trying to do an autopsy on one.

It’s not “trendy” or “hip” to be building entire projects without an understanding of what you’re doing. It may be fun, but it can also be really dangerous. Especially if you don’t have a plan or vision for what you are doing.

So please, be careful with AI tools, and don’t end up like me, or any so-called “vibe coders”.

Videos

One of the biggest benefits that the rise of consumer technology has brought to us is that it’s essentially let anyone become their own videographer.

Unfortunately, that shift has rooted itself way too deep into society. And while it’s great that anyone can make their own video about whatever they want and share it with the rest of the world, it’s become obnoxious that there’s now this dependence on the consumption of video for understanding the world around us.

I notice this mostly because I’ve been complaining about the classwork I’ve been doing, which has required me to make videos despite my degree having nothing to do with video production or broadcasting. It’s a process that distracts me from getting other things done (regardless of how much I actually have to do), as I need to go and record a bunch of things, then stitch them all together in iMovie. And with university requirements on closed captioning, it’s a process that feels even worse.

I genuinely have no problem with the closed captioning requirement. But closed captioning is also a process that requires writing out a better script and having a general understanding of what you are writing about, to avoid as much rambling as you can. And in general, the fastest way has just been to upload to YouTube and use their captioning services there. Which also feels like it violates the spirit of the rule.

But honestly, what makes the whole video production process obnoxious is the lack of knowledge I have about it. I may have experience working with several video editing tools, but I certainly don’t have any experience with designing a good informative video. And while others, as part of their jobs, may be able to whip out videos quickly before a deadline, I don’t have that kind of workflow.

It’s true that some things can only be understood in the form of a video. But there are a lot of things that genuinely don’t need to be explained like that. I feel like throughout my classes, I’ve had to waste a ton of time editing and putting videos together just to avoid the inconvenience of a submission watermarked by some cheap third-party tool, and then worrying whether my editing is so fast or slow paced that an instructor isn’t going to be able to understand it.

Many people don’t even seem aware that their devices may be able to record their screen without the use of third-party software. And the ones whose devices can’t don’t seem to know about applications like OBS that can do everything watermark-free.

Of course, OBS can also be a difficult app to work with if you’re just wanting to do some basic recording. Especially given that it’s an app also built for livestreaming. Which leads me back to square one: making videos is not easy. And we shouldn’t be asking everyday folks to compile their findings into presentations that end up going either too long, or far too short (definitely looking at TikTok here).

In addition to videos being difficult to make, finding the information you need in one isn’t easy either. Especially when trying to solve a computer problem.

Several times while looking for guides on configuring certain software, I’ve encountered online articles that are actually just the transcripts of a video. So they’ll refer to things with no context, because the context was entirely on screen.

It’s not to say that I don’t like watching videos at all. It’s just that it’s frustrating when the only source of information you can find for something is a video. Since you can’t just use the find-text shortcut to jump to the specific portion you need, you have to keep track of a long playback bar and repeat small segments over and over again. And if the person or platform doesn’t include timestamps, there’s also just sitting around waiting for the specific part you’re looking for to come up.

In addition, while everyone reads at a different speed, everyone is forced to watch a video at the same speed. Sure, many clients allow you to adjust the speed of the video. But unlike reading, where you can tell you might be going too fast when you stop understanding things, and slow down if you need to, a video is always going to keep moving, and adjusting the speed further doesn’t guarantee that you’re going to pick up anything more.

Google seems to be aware of the mess they’ve created with YouTube and its impact on society. So, using the Next Big Thing, they’re putting everything uploaded to their website through their information-shaped-sentence generator, so you can get a watered-down, potentially incorrect digest of bits and bobs here and there.

AI kind of feels funny here, because it sometimes shows you just how messed up society’s workflow as a whole is, and all the weird bending over backwards we have to do to get things done. Like sending arbitrary amounts of emails, engaging in meetings that could’ve been emails, or making large video presentations that could’ve been an email.

Writing a lot of this out (mostly just saying it aloud, because I do a lot of the writing for this blog through the dictation function on my phone), it makes me realize just how much people hate reading. Perhaps there’s a bit of a literacy crisis? I’m honestly not sure. But it does almost look like it could be a sign of a bigger problem.

The internet

The internet may be increasingly threatened, and increasingly controlled by large, privatized corporations, but it’s still ours:

Not just mine, not just yours, ours.

It’s a place to be weird, to be yourself, to be anybody. It’s a place for you to be whatever you want, wherever you want. Whether as an elaborately constructed animal character or a silly JPEG. It’s a place where communities can gather and cultures can connect.

It may seem like a hedge maze controlled by only 30 or so groups, but it does have exits. Exits that lead to wonderful places. And when one place crumbles, a new one can form just as easily from someone else. And the cycle continues, much like life itself.

Running a spot on the internet may not be the easiest task, both mentally, with all the security measures, updates, and constant maintenance that needs to be done, and financially, with acquiring the right hardware to host on, paying monthly fees for hosting and infrastructure, finding the “right” internet connection, and registering and renewing domains. But when you have the ability to reach a large number of people, it can feel breathtaking. Especially without the beauty filters of engagement bait.

Even as robots patrol the tubes, fibers, and radio waves, snagging whatever data they can for themselves and flooding them back with hastily crafted sludge, no amount of what they do can ever completely drown out those who make it all possible.

The right connections make a large difference. And while big tech may have warped the masses’ idea of the web with large, free playgrounds that are hard to compete against, it can still be taken back. It’s not easy, but you don’t have to take back all of it.

I have to think about the Kids’ Guide to the Internet from the ’90s, and the amount of naivety it shows. While it still has a lot of massive corporations and groups highlighted, the ’90s internet still looks like it had a vibe to it that was fresh and new. I can’t say for sure, because I didn’t use it at the time, but that’s at least how it looks.

Now that I’ve gotten on the internet, I’d rather be on my computer than doing just about anything!

It’s strange to think about, because today this quote sounds like something out of an addiction. But I’m willing to wager that in the ’90s, when the internet was still new, it truly felt like somewhere that was enjoyable to spend hours on.

And in many cases it still is. Though I sometimes wonder if slow downloads and having your connection cut off whenever someone picked up the phone would be more enjoyable than trudging through hours of crud.

The fact that the internet has made communities visible, ones that have been seen as taboo (and still might be) or otherwise nonsensical, and let them thrive without as much fear, is astounding. Groups that a normal person would not bother to think of or necessarily tolerate, like Otherkin and Plurals, can have a community where they gather. That doesn’t necessarily mean it will be free from those wanting to disrupt it. But the internet also isn’t a place where everything can be changed on a whim all at once. No one has to agree to the changes others put up. And as they say, the competition is always just a click away.

The boom in the exchange of ideas has led those with a fear of change to cower at everything new coming in. Everything seems alien to them because it’s not what they had when they were young.

So overall, if you’re on the internet (which you probably are), keep being weird, keep being rad, keep being kind, and if you can, do look into making your own place on it. Whether that’s through picking up the essentials of HTML and CSS, or using an off-the-shelf solution like WordPress. And if you can’t do that, at the very least, branch out on the internet. Because you never know what’s going to happen in this crazy world.

3D Printing

So these past couple of months have really been the prime time for me to finally get ahold of a lot of the things I have always wanted. I managed to get a robotic vacuum, a nice set of speakers, and, today’s topic, a 3D printer.

The specific model I got isn’t particularly exciting. It’s from a random Chinese brand called Flashforge, and the model is the Flashforge Adventurer 5M. I got it at the beginning of February, and it arrived while I was sick. But I immediately mustered what strength I had to haul it downstairs and set it up.

During the process, however, I accidentally managed to break the hinge on the display. So, after a few weeks of waiting for a replacement, and some fiddling with the display ribbon to get a minimal amount of picture for initial setup, the printer was finally complete.

3D printers are interesting because they’re not exactly like regular printers. Unlike a regular printer, a 3D printer isn’t always successful; with a regular printer, it’s pretty rare that you would need to print something out again (barring running out of ink, paper jams, etc.).

But the entire 3D printing process is a very intricate one, one that involves precision and depends on everything being exactly right. Of course, if you maintain your printer well, you don’t have to worry as much about it messing up. But even then, not everything is going to come out exactly the way you want it. From what I know, a 3D printer prints by following the instructions it is given. These instructions simply command the printer to extrude filament that is hot enough to bend into shape, but cool enough that it quickly hardens so that another layer can be placed on top of it. Repeat this process hundreds of times, and eventually, you end up with a finished product.

As you might guess, just like building a house by laying bricks, this is a delicate process, and the desired result isn’t always guaranteed to happen on the first try. Problems I’ve already run into include:

  • The filament gets jammed, leading to nothing being printed as the nozzle moves around.
  • The build is knocked over, causing the printer to try and lay filament on air (which falls to the plate, creating a mess).
  • The printer extrudes too much filament at once, resulting in a giant clump of filament stuck together at the nozzle.
  • The build gets stuck to the nozzle, leaving an unfinished build with a giant goopy blob attached to it.

Many of these problems can be fixed with some common troubleshooting I picked up in my A+ certification, and often just involve running some functions from the printer itself. But it definitely doesn’t detract from the disappointment of a failed model and all its wasted filament.

Why I wanted a 3D Printer

In addition to thinking that they were cool, I wanted to be able to print a lot of the things I had been considering ordering off of Amazon, but didn’t really want to, because it just didn’t seem worth it to spend $10-30 over and over again on random things made out of plastic. Especially compared to a one-time purchase, plus multiple purchases of filament, that could make a lot of those things. Coupled with a little DIY, I could probably save a lot of money and make something that looks just as good, and more customized as well.

At least right now, I can do that, provided there’s an STL file available. In terms of creating models, I’m only a little bit good with Blender, and FreeCAD feels like it’s way out of my territory for something I could learn. A lot of the stuff behind it feels like something an engineer would do (I wanted to be a software engineer at one point, but only because I thought the title “engineer” sounded really nice and important).

Instead of going off on that though, I think I would rather just infodump a lot of information about the 3D printing pipeline, so you can get an idea of it and how interesting it can be.

How the 3D printing process works

The printing process starts with the model you want to use. This will probably be in the Stereolithography (STL) file format. There are a few other formats as well, but this one is the most common and widely compatible.

Whether you made this model or downloaded it off the internet, its first stop is the slicer. For me, that’s OrcaSlicer, an open source application based on PrusaSlicer and Bambu Studio.

The slicer is where the model is inspected, adjusted, and checked for whether it’s physically possible to print as shown (with adjustments made, such as adding supports, if needed), then converted into instructions the printer can use to adjust its settings and know what paths it should take when extruding filament. It converts the model into a programming language called G-code (which was first created in 1963, according to Wikipedia). It kind of looks like Assembly, with code like this:

;WIPE_START
G1 F3000                      ; set the feed rate to 3000 mm/min
G1 X-21.395 Y-9.05            ; wipe the nozzle across the last printed line
;WIPE_END
G1 X-14.638 Y-5.5 Z.6 F30000  ; fast travel move with the nozzle lifted to 0.6 mm
G1 X.099 Y2.242 Z.6           ; continue travelling to the next print area
G1 Z.2                        ; drop back down to the 0.2 mm layer height
G1 E.8 F2100                  ; prime the nozzle by extruding 0.8 mm of filament

Basically, a lot of commands that instruct the printer on specific adjustments it should make to its positioning and settings. The G1 command specifically tells the printer to move the print head in a straight line.

There are other commands as well, each of which looks something like the ones shown, but with a different letter and number. They tell the printer to do a variety of things:

  • Adjusting the fan speed, plate level, and extruder temperature
  • Starting and stopping certain components
  • Turning and moving the print head in different directions
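
For instance, a few of the common ones look like this (a small illustrative sample; the exact command set and behavior vary between printer firmwares):

G28        ; home all axes
M104 S200  ; set the extruder temperature to 200 °C
M140 S60   ; set the bed temperature to 60 °C
M106 S255  ; run the part cooling fan at full speed
M107       ; turn the part cooling fan off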

Overall though, it’s not particularly important to know this if all you want to do is print. But it’s good to know so you can understand what’s happening, especially if you want to start playing with more advanced, open source firmware like Klipper, which relies more on you setting up different macros and gives much more fine-grained control of your printer.

The point is, the slicer converts the model into G-code, and gives you a preview of what the model might look like when it’s printed. From there, you can either export the G-code file, place it onto a flash drive, insert that into the printer, and print it from the on-screen menu. Or, in my case, send the G-code file directly to the internal storage of the printer and automatically start printing.

If it helps any, you can think of the slicer as essentially being the print dialog box you get whenever you press Control/Command + P. It has a ton of options, but chances are, depending on what you are doing, you may only need to touch a single option, or none at all. It just so happens that printing a 3D model isn’t something one does every day, so apps don’t bundle a slicer the way they all bundle a print dialog.

What can I Print?

The thing I have printed the most, with 3 copies so far, is #3DBenchy, a popular little tugboat model that serves as a calibration model of sorts. Its symmetrical design with lots of overhangs and cavities, its ability to be printed in place (sliced without changing any settings), its quick print time (a little over half an hour for me), and how cute it looks all make it a popular model for testing whether your printer is working correctly. In fact, as of right now, it’s actually one of the most downloaded models on Thingiverse, and it recently entered the public domain as well.

Conclusion

These printers, while a little cumbersome to work with, and involving some things that are a little beyond my scope of knowledge, are ultimately very interesting to work with. As long as you make sure to adjust your expectations, and know what it is that they can and cannot do, I would recommend getting one if you have the ability to.

Sonos Ecosystem Review

So, a while ago, there was this huge uproar over a company called Sonos, a manufacturer of wireless speakers, mainly because of a new version of their app that they deployed.

While the whole point of the uproar was essentially to tell people to stay away from the speakers, it drew me closer to the idea for some reason.

To talk about what makes Sonos speakers unique, a little bit of background is needed. In addition to being relatively high-quality speakers (disclaimer: I’m not an audiophile, nor am I affiliated with Sonos), Sonos speakers essentially live within their own ecosystem, one where audio playback can be grouped, ungrouped, and transferred between speakers. If this sounds similar to what AirPlay (also available on speakers) or what select Google Cast devices can do, that’s probably because Sonos was the one to popularize the concepts behind it.

What separates Sonos from those, however, is that Sonos speakers play audio directly from a streaming source, which can be configured to be either a popular streaming service or a local SMB library of your own music, and this is mostly the only thing they do. In addition, these speakers aren’t just individual endpoints to play to; multiple of the same speaker can be set up to work together as if they were a stereo system. Essentially, these speakers let you create an entire home audio system without the need for excessive wiring or drilling through your home.

The app itself functions essentially as an audio remote, as well as the main place to manage the settings for your speakers and quickly set up new ones.

A screenshot of the Sonos App on an iPhone
The Sonos app is fairly simple, its primary purpose being to serve as a controller for your audio, with the media player at the bottom pulling out to reveal all the speakers on your network. Changing the output to another controlled device is as simple as tapping it in the list.

Despite needing an account for speaker setup and for linking music streaming services, the app, unlike most IoT apps, has no cloud connection. That means you need to be connected to the same network as the speakers in order to control them and adjust their settings. However, other users can also download the app and join an existing system, and start playing and controlling audio without needing to sign in or create an additional account (signing in is needed in order to adjust settings, however).

One of the major benefits Sonos touts about their app is that audio can continue playing even during a phone call, since the app doesn’t rely on any system media players and simply streams the audio directly to the speaker. I’ve personally found the app a lot nicer for playing streaming radio stations (something I can get through Apple Music, as well as their own in-house, ad-supported radio service; the latter I only really use for the ad-free white noise and rain stations).

While I’ve never seen the original app, I can tell that the new app is likely built on some cross-platform framework. And while it’s not as drastic as some users have complained (much of the work at Sonos went into damage control over the app while newer products were pushed aside, so it’s likely many of the greater issues have been worked out), I have had a few hiccups here and there with the mobile apps.

Other features

Music playback aside, the app lets you set up quite a few other features as well. For example, alarms can be set up within the app, which basically just play a predetermined song at a certain time. There are also sleep timers, which work like most sleep timers on other platforms, fading the song out after a set amount of time.

The speakers themselves also work with Apple AirPlay, allowing Apple devices to cast audio to them, or a HomePod to be asked to play audio in a specific room. They can also be set up to be controlled from Alexa, as well as Apple and Google Home, though Apple Home is a little more limited and can only actually control playback on speakers currently using AirPlay.

One of the other features that can be set up is Trueplay, an equalizer that is calibrated either by holding an iPhone upside down and waving it around the room, or by using a speaker’s built-in microphones to measure the acoustics of the room and adapt the audio to it. Whether or not you hear any enhancement will depend on the shape and size of your room, as well as the method of calibration you use. I personally found the former to sound a lot better.

The aforementioned microphones can also double for speech recognition with a voice assistant, with Alexa being the main supported one through the Alexa Built-in program. While I don’t actively use Alexa in my home, Alexa on a Sonos speaker is better suited as a satellite for a home already using an Amazon Echo than as the main set of devices, due to a Sonos speaker’s inability to function as a Matter controller over either Wi-Fi or Thread.

In addition to Alexa, Sonos also maintains their own in-house assistant called Sonos Voice Control. It mainly functions as a companion to the main app, letting you perform some of its most common functions without the need to actually open it. But other than that and a few other minor things, that’s mostly all it does.

API

In my opinion, one of the biggest saving graces for Sonos during the app fiasco was their well-crafted local API. While they do have a separate cloud-based API for more cloud-based control (such as creating content providers the speakers can stream from, or third-party web apps), there’s a local API as well, relying on open standards like UPnP. While it’s not as documented as their cloud APIs, it’s still great that the speakers even have such a local API. It won’t really save anyone if the company shuts its doors and all the cloud functions go offline, but it still gives other apps more ways to latch on.
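
As a taste of what that local API enables, here’s a minimal sketch using SoCo, a community-maintained Python library that wraps the same UPnP interface (the IP address below is a placeholder, not one of my speakers):

# Minimal sketch using the community SoCo library (pip install soco).
# The IP address is a made-up placeholder.
import soco

# Discovery happens entirely on the local network; no Sonos account
# or cloud round trip is involved.
for speaker in soco.discover() or set():
    print(speaker.player_name, speaker.ip_address)

living_room = soco.SoCo("192.168.1.50")  # hypothetical address
living_room.volume = 25                  # set absolute volume
living_room.play()                       # resume whatever is queued
track = living_room.get_current_track_info()
print(track["title"], "-", track["artist"])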

One of those apps is on my list of favorites… Home Assistant. Sonos is actually a featured integration within HA, and the experience really shows: speakers are automatically discovered and added, and they update in real time. There are even a couple of HACS cards that can show you the exact playback position on a speaker.
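
And once the speakers are in Home Assistant, anything that can reach HA can control them. Here’s a small, hedged sketch using HA’s REST API (the URL, token, entity id, and stream address are all placeholders, not values from my setup):

# Hypothetical sketch: ask Home Assistant to start a stream on a Sonos
# speaker via HA's REST API. All the values below are placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

resp = requests.post(
    f"{HA_URL}/api/services/media_player/play_media",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "entity_id": "media_player.office_sonos",        # assumed entity
        "media_content_id": "http://example.com/noise",  # assumed stream
        "media_content_type": "music",
    },
    timeout=10,
)
resp.raise_for_status()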

Ironically, while writing this, the introduction to the Sonos API documentation has actually given me a better understanding of what Sonos products do than their actual marketing pages have.

Verdicts

Personally, I would recommend Sonos to anyone with an Apple device, or anyone who is looking for something more streamlined for music compared to the bells and whistles of most smart speakers.

Compared to the competition of other smart speakers, Sonos can seem much more limited at a much higher cost. But overall, Sonos really isn’t in the same product category as smart speakers like the Echo or HomePod. It’s not just a regular Bluetooth speaker (though some do have that functionality), but being an assistant isn’t its goal either.

I have found Sonos speakers much better for playing audio, in part thanks to their well-designed (if not well-coded) app. A Sonos feels like a device I actually want to play music to (and I feel a constant twinge of guilt that I’m not making the most of the one sitting on my desk, aside from falling asleep to white noise). While AirPlay suggestions on the iPhone make it a little easier to find a device to play music to, casting from any device can sometimes be frustrating (especially during the process of disconnecting). And considering that the way you get most smart speakers to play music is by asking for a specific song, which can be like wishing upon a star as you end up with the same song from a different album, or another song entirely, the app feels really well designed for those moments when all you want is to just play something around the house, even if it requires a few extra seconds of grabbing your phone. And the fact that I can share that ability with my family, without them needing to go through the process of signing up, was a really great design choice.

My only real concerns about Sonos have been a few class action lawsuits around the company and some general uncertainty about how long products receive security updates. But other than that, they do work exceptionally well, and I wish I had the ability to acquire more of the speakers for my home.

Weird Cartoon Enthusiasts

It’s a pretty open secret at this point that many, but not all, furries love inflation. Not the economic kind (overused joke), but rather, body inflation. Seeing myself blown up like a balloon has become an oddly pleasant satisfaction, and something I can take a bit of comfort in both ways.

That said, there’s also a set of schticks that an even smaller number of furs are into. It’s a hard group to explain, so I’ll provide a handful of examples from pop culture, in an easy-to-web-search format, to give you an idea:

  • The reveal of Judge Doom from Roger Rabbit
  • The 2004 Pinball video game “Mario Pinball Land”
  • The various shapes of Tom from Tom and Jerry

Simply put, there’s a group of furries (among others) that enjoy various cartoon tropes along these lines with their own characters: being flattened, stretched, and squished into various shapes and sizes. It’s kind of hard to put it explicitly under the category of Transformation because of how narrow that is, and the process doesn’t always tend to be the focus of it, so I’ll just simply refer to it as Toon (of course, one’s interests can always overlap, and there is a lot of overlap here, especially with inflation being a common cartoon gag as well).

In fact, at one point between 2016 and 2022, there was a group known as The Squish Gang (archived link), led by Arcaxon the Arcanine-Corgi (a toon and transformation enthusiast), that served somewhat as a de facto hub for these kinds of enthusiasts. While the group wasn’t explicitly limited to furries, it was run and occupied by them, with close to 1,000 members in its Discord server before its closure. It frequently engaged with users through raffles and other cool projects made by its users and operators. And with plenty of cartoons being occupied by anthropomorphic animals, there was plenty of content for all to enjoy.

While it might be easy to dismiss the concept of toon as just another weird furry kink (for starters, it’s not explicitly limited to furries or furry characters), it’s also an overlapping group of users who just enjoy cartoons in general, especially those from the golden age of animation. That being said, it is still a kink to many (though not all), and Rule 34 still very much applies here, so you will find explicit content as well. But there’s a pretty healthy mix that makes it hard to call it a kink alone.

Some artists in the community I like are:

  • Will Mofield, a Gray Wolf with pants from the UK.
  • Wringed, as Taylor the Lemur.
  • Malletspace, as Malus the Hyena, with a striking blend that looks like a mix of Western animation and anime.
  • EccentricChimera, an Eastern Dragon in a top hat.
  • Matt Valkyrie the Lion, who draws balled up, buff looking critters.
  • UnknownBoy the Incineroar-Tiger, who I just recently got a commission from at the time of writing.

There are many more out there all around the world, and plenty of other folks that I like and follow closely as well.

Where I sit in the group

I, as I mentioned before, am a toon enthusiast as well. While I wish my OC was much more of a Toon character, similar to many of those that I follow, I’m not sure what I could change about it. And I don’t really know if I would, since other interests make their mark on my wolfself as well. I’m a bit of a fantasy buff (I was inspired to name myself “Soulfire” because I was initially making a D&D character and thought the name sounded very Warriors-inspired, despite never having read the books, only seeing a bit of playground roleplay when I was younger), and there isn’t that much behind the fire; it’s just cool. I just don’t think anyone has ever seen a fiery toon wolf before. And I’m not sure how I would execute that.

Of course, it also doesn’t really matter either. It’s who I am, so I can do what I want.

Therianthropy

I consider myself Therian. Perhaps it’s an odd way to cope with autism (if it’s actually coping). Even though everyone around me locally would likely think that I’m crazy (I’ve occasionally described to my mom how nice it would be to be a wolf, and she’s just responded “you are human”), and I only found out about the concept a few years ago, it’s still something that clings onto me closely.

Maybe it’s just growing up lonely that drives one crazy, or maybe it’s the fact that the autism spectrum puts you through a lot of challenges. But I just don’t feel like I fit in with a lot of others. I don’t think I have since early elementary school.

Sometimes, especially during extreme bouts of emotion (good and bad), I begin to feel a bit like I have a tail. It doesn’t really wag or anything, but I can feel it at the back. Of course, I look at myself and I know that I’m human, but deep down, it would be truly nice to be something not human that better represents me. More body language, better abilities, but I’m not sure that’s what the whole concept of Therianthropy is about.

I look at myself in the mirror, and I feel like I’m simply contained within a vessel of some sort for my actual soul. There’s just a sort of feeling of disconnection between myself and my body during most moments, and this feeling inside of “is this me? Or is that me?”. Maybe it’s an identity crisis forming, maybe it’s a more pragmatic side, maybe it’s something more.

Is all of this something I’ve hypnotized myself into thinking? Is it truly a part of me? Is it just a phase? Am I just replacing the word “hand” with “paw” ironically? Is this just some kind of metaphor to me? All those questions are something I don’t think I’ll have the answer to…