Wednesday, April 15, 2026

A bit of Rygel in the DLNA

I needed to share some pictures from my PC to the TV, and I remembered setting up Rygel in the past to do it. It wasn't pretty because it only worked if I kept rygel running in the foreground. That was several distributions ago, so I figured things might have changed.

I forgot how people on this side of the fence think. The "don't fix it unless it's broken" and "you need to scratch your own itch" mentality is still very much in force. A quick check confirmed that things were pretty much the same: not working by default. I prepared myself for an afternoon of shell commands and config file editing. But this time, I had something going for me. I had AI, specifically Google's Gemini.

It started simply enough. Gemini advised me to check whether everything was OK, which included using verbose mode via the command rygel -v. I found that really just displayed the version. What this meant was that Gemini may not know about rygel specifically and was using similar programs as analogies to come up with answers (Lesson #1). So I did what most people in a conversation would do: I told it that it was wrong. Chatting with an AI is a form of communication, which means information has to flow both ways. Telling it that it was wrong made it think harder, and it gave me the correct parameter, rygel -g 5. This increased the log level, but it still didn't show anything. Gemini then told me to set the environment variable G_MESSAGES_DEBUG=all when running rygel, which forces output from the underlying libraries. That did the trick. I also had to change the user's rygel config file to point the URIs from a link to the actual full path. From there I was able to set it to run automatically at the user level using systemctl --user ...
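The steps above can be sketched as shell commands. Only rygel -g 5 and the G_MESSAGES_DEBUG variable come from my actual session; the systemd unit name is an assumption, since the exact command I used was longer than what I wrote down:

```shell
# -v just prints the version; -g raises the log level (5 = debug)
rygel -g 5

# Force debug output from the underlying GLib-based libraries as well
G_MESSAGES_DEBUG=all rygel -g 5

# Once it works, run it automatically at the user level
# (unit name "rygel.service" is assumed; check what your distribution ships)
systemctl --user enable --now rygel.service
```

The user-level unit means rygel starts with your session and no longer has to sit in a foreground terminal.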

 

Saturday, April 11, 2026

I propose a new Firefly timeline

To recap, I thought the new Firefly timeline could go something like this:

  • The end of the TV series in Objects in Space
  • The capture of River by someone. Could be the Hands of Blue or a new villain.  
  • The rescue of River (now by the crew of Serenity) and the visit of the Operative to the Laboratory. 

The gory details about how this came about are here. There are a few plot points between the TV series and the movie to be addressed:

  1. Inara's departure
  2. Shepherd Book's departure
  3. The role of and eventual demise of the Hands of Blue (maybe)

It would also create opportunities for new critical story lines or story points.   

  • The Hands of Blue would still be alive and still chasing River
  • There could be more storylines involving Inara and Shepherd Book with the crew before their departures
  • And the eventual re-capture of River and her being brought to the Laboratory. 
  • The search by the crew of Serenity to find where River was brought to. 
  • The planning and execution of the rescue
  • The events leading to their heist on Lilac 

And between it all, other storylines that create new stories, introduce other characters, or expand minor characters from the movie to be more prominent. The Operative could be introduced sooner, or another villain could take over from the Hands of Blue and capture River. All of which would eventually lead to the movie Serenity.

Just thinking aloud. 

Was I thinking of writing fan fiction using this timeline? Nothing ever came of just thinking about it.

Thursday, April 09, 2026

Firefly Returns and I have thoughts

The recent announcement of the Firefly revival as an animated series brought back a lot for me. I had held off watching Firefly because I missed it when it came out, due to life and work, and the high praise it was getting made me wary of possible over-hype. When I finally was able to watch it, I just couldn't stop. And like so many before me, the feeling of loss, emptiness and confusion came when the series just ended. I followed the development of the movie and watched it on the big screen. The movie was great, but I didn't agree with the framing of the events. I hadn't yet been introduced to the graphic novels, but later found and read them. I still didn't agree with the timeline.

Sorry for the bad AI image

Looks like I wasn't the only one. The timeline of the movie relative to the series didn't seem right. The Operative's visit to the Laboratory didn't align with the time elapsed before his next encounter with the Firefly crew. From The Operative's perspective, it seemed only a short time had passed. From the crew's perspective, it was some time ago, because River Tam's rescue from the Laboratory happened before the start of the series. Which implied that The Operative's visit to the Laboratory came long after River Tam's rescue.

That felt off to me. The Operative would have visited the Laboratory soon after River was rescued, which would mean he was chasing the crew throughout the TV series but could never get close. That seemed unlikely. The Alliance High Command would have sent The Operative to investigate River's escape quickly. All of this was explained in the "Those Left Behind" graphic novel, where the Hands of Blue failed to re-capture River, the Alliance decided it needed "a more personal touch", and The Operative had a cameo. That placed his visit to the Laboratory long after River's escape, and that felt off too. The Alliance would have dealt with the Laboratory severely for letting River escape, which means there would have been nobody left for The Operative to kill when he came much later.

I propose a different timeline that stretches the time between the end of the series ("Objects in Space") and the movie Serenity. This would create the space for more stories to be told. Alas, it requires that the events of the "Those Left Behind" graphic novel no longer be canon. I developed this alternative timeline right after I saw Serenity, because I didn't read the graphic novel until long after.

In this new timeline, the events of Serenity are not changed; rather, the timing is. I propose that River Tam's rescue, which we saw in the movie, happened between the end of the TV series and the movie. That the Firefly crew could have pulled it off is entirely plausible given the events of the episode "Ariel", where they infiltrated a hospital. This would put The Operative's visit to the Lab much later, and make the time between that visit and him finding River much shorter.

The announcement forced me to re-visit these thoughts so many years later. I'll need a bit of time to think more about this. Stay tuned.

Monday, June 27, 2022

For VM junkies, the bridge to Docker goes through Bitnami

I have been spending time trying to wrap my head around containers, mainly Docker containers. There are others that are up and coming, but since Docker is the most popular, understanding it will prepare you to understand the rest. It is not easy for me, coming from a VM background, especially understanding some of the ways things work in containers versus how they work in a VM environment. Modelling Docker from a VM perspective is the fastest way for me to learn it, but there are some major differences.

But I haven't stopped using VMs. In fact, a recent discovery of mine has shortened the distance from "I want to try this" to "I have it running to test things out". Bitnami makes and maintains VMs that are ready to download and use. Each VM provides a specific function: essentially, a dedicated system delivering a service. They come in the OVF format, making them fairly portable. However, I had problems importing one on an old ESXi host because the OVF format has changed and there are 'extra files'.

I found a good web gateway to allow access from the Internet to a local server, usable from any browser. Apache Guacamole is not a household name, but it offers access via SSH and Windows desktop through its web interface. Just click on a pre-defined link and it will bring up the session in the browser.

I tried extracting the vmdk file (the disk file) and creating a VM around it. But the disk didn't like the way it was being booted and kept dumping me into EFI. A little reading made me aware that the Guacamole VM was running on Debian... running GRUB, my mortal enemy. My clashes with it are chronicled elsewhere, so I won't bore you.

I then tried running it on a KVM host. Again, I unpacked the OVF, converted the VMDK to QCOW2 and created a VM around it. It worked straight out of the box. Bitnami VMs have a one-time startup sequence, and the first login requires a password change. But once the banners show how to connect to Guacamole (or whatever service the VM is providing), it is intuitive to work with. Links and menu items can be spawned off into other tabs (showing a high degree of HTML compatibility).
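The unpack-and-convert route can be sketched like this. The filenames are illustrative (Bitnami's actual bundle names differ), and virt-install is just one way to build the VM around the converted disk:

```shell
# An OVA bundle is a tar archive containing the OVF descriptor and disks
tar -xvf bitnami-guacamole.ova

# Convert the VMware disk to QCOW2 for use with KVM/QEMU
qemu-img convert -f vmdk -O qcow2 bitnami-guacamole.vmdk bitnami-guacamole.qcow2

# Create and import a VM around the converted disk
virt-install --name guacamole --memory 2048 --vcpus 2 \
  --disk path=bitnami-guacamole.qcow2,format=qcow2 \
  --import --os-variant debian10 --network default
```

The --import flag skips the installer and boots straight from the existing disk, which is exactly what a pre-built appliance image wants.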

Making it work with SSH hosts is straightforward, and Windows Remote Desktop connections are not too difficult if you are the admin. Windows requires some modifications to the server's Remote Desktop settings, but nothing that would cripple it or make it riskier.

I used to love SSL gateway devices before they were killed off by Java security updates and by the lack of understanding from security professionals, who always favoured VPNs. This gets close to the connectivity level those devices used to provide.

Thursday, November 04, 2021

The Right Kind of Complex

I've always been interested in new technology. But I'm always worried about complexity for complexity's sake. Now, I know that some people push for this type of technology simply to take advantage of it. By making it complex, they make it mysterious. When it's mysterious, it's magic. And when it's magic, you can charge whatever you want.

There are also those who want to be in an exclusive club. And complex technology is a way to build a clubhouse where only those who understand are allowed in. Well, not really: those who understand but don't fit the exclusivity criteria still don't get in. And it seems that way with systemd. Now, before you go on and skip this because it's going to be another systemd rant, rest assured it's not.
I want to talk about something that is appropriately complex, yet rewards those who brave its complexities. I'm talking about Docker. I've heard about it for so long on numerous technology podcasts. I heard the podcast where the inventors of Kubernetes began to popularize it. Yet I found no occasion to use it. Fortunately, I can set up systems pretty fast and never needed it before to improve my delivery cycle. I believe in forward planning, and in leaving enough space to handle the unexpected.
However, I was recently pressed for time to deploy a system that used multiple nodes to process complex data. There was a front end, a node manager, a back-end component and the nodes themselves. The authors of the system very much encouraged deploying it using Docker. The system was intriguing to me and it had components that I hadn't worked with before. But there wasn't enough time.
There were the usual challenges of setting up a system, such as dealing with dependencies and outdated components. On top of that, the client requested migrating the system from its original Linux distribution to a distribution the organization was used to managing. After giving it a few tries (and failing), I decided to follow the strong suggestion of the authors and deployed it using Docker. The system was deployed in almost no time at all on the distribution favored by the client. I was taken aback at how simple the process was. I understand that the distribution inside didn't really change; the container was more or less self-contained and sufficient to make the system work.
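For a multi-component system like that, the Docker deployment typically boils down to a few commands against the authors' compose file. A minimal sketch, assuming the project ships a docker-compose.yml with hypothetical service names (the original project isn't named here):

```shell
# Pull the images and start the whole stack in the background
docker compose -f docker-compose.yml up -d

# Check that the front end, node manager, back end and nodes came up
docker compose ps

# Follow the logs of one component while testing
# ("frontend" is a hypothetical service name)
docker compose logs -f frontend
```

Because the images bundle their own userland, the host distribution barely matters, which is why the client's preferred distribution stopped being an obstacle.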
I decided to take a deep dive into Docker. I wanted to know enough to deploy other solutions and to make Docker a tool that I would regularly use. I found that my experience building systems allowed me to understand not only what was going on but also gave me insight into the decisions made by the people who created the Docker images. I found that Docker is complex, but satisfyingly so given the rewards of using it. It is complex in the right way. It is complex because it needs to be complex. It isn't making something previously simple complex for its own sake. It rewards those who are willing to brave its complexities, but still offers riches to those who are just intent on using the basic functions. It isn't magic, but it does seem so.
I have only begun my journey with Docker. The mysteries of building and managing my own images lie ahead. From where I stand, some parts do look complex, and where they aren't, the decisions and issues around those decisions are. I love learning about systems like this and passing on the knowledge to those coming up behind me. I also like sharing the issues and working out with my clients the decisions around them. That way, I make the magic less mysterious. It still is magic to them, but sharing the decision-making process creates collaboration and acceptance.

Wednesday, May 19, 2021

A problem not big enough to solve?

In open source, 'scratching your itch' is the source of birth for many a project. It assumes that someone who has a problem (a.k.a. is "itchy") has the resources (e.g. time, effort) to develop a solution (scratch that itch). With so many open source solutions already built using this time-honored method, the issue nowadays is more about finding the project that scratches your itch than actually building one of your own. In fact, this has led to a lot of dead projects, some of which were brilliant but lost in the shuffle.

But it surprised me to discover an itch, a problem, that should have been so prevalent that someone should have done something about it. 

I was setting up a new MS Windows 10 environment at home and decided I needed to access a Linux box remotely, including running X11 applications remotely from the box. My go-to solution has been MobaXterm, but since it is a freemium solution, I have always installed it with a caveat. I also didn't like it charging for what was basically integration of existing open source solutions (in a way, at least). Okay, re-packaging. I remembered seeing an alternative called mRemoteNG, which sort of has the same features, is open source, and is expandable.

<a href="https://iconscout.com/illustrations/solved-the-problem" target="_blank">Solved the Problem Illustration</a> by <a href="https://iconscout.com/contributors/manypixels-gallery">Manypixels Gallery</a> on <a href="https://iconscout.com">Iconscout</a>
I downloaded a portable version and in no time was able to reach servers via SSH tunnels. It does rely on external applications, like PuTTY, to do the actual connection. But the presentation and configuration management features it provided were very much welcome. Finally, I decided to use an X11 application on the server. MobaXterm has a built-in X11 server, and using it was a no-brainer. But mRemoteNG has no documentation for it. Even on-line, people did offer suggestions, like using the VcXsrv X11 server for Windows, but with no clear indication anybody has successfully done so. Which is odd, considering that running an application from a Linux box would be one of the first things one would do after connecting to it. Or have we been disciplined enough to limit ourselves to the command line?


I used Xming back in the day, but there are warnings that it doesn't run on Windows 10. There seemed to be a myriad of things to consider when setting up VcXsrv (e.g. display number, permission settings) before being able to run a single X11 application. Which is strange, considering the X11 architecture was intended to let complex X11 applications run and consume resources on a remote machine while presenting just the UI to the user.
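For the record, the basic recipe that the scattered forum suggestions add up to looks something like this. It is a sketch, not something I have verified end-to-end with mRemoteNG; the host name and display number are assumptions:

```shell
# On the Windows side: start VcXsrv listening on display :0, then, in the
# terminal session that will launch X11 clients, point DISPLAY at it
export DISPLAY=localhost:0.0

# Simpler and safer: let SSH handle the forwarding and set DISPLAY for you.
# -X enables X11 forwarding (the server must allow it in sshd_config).
ssh -X user@linuxbox

# On the Linux box, any X11 app should now render on the Windows-side server
xclock
</imports_placeholder>
```

The catch with mRemoteNG is that its bundled PuTTY session has to be configured for X11 forwarding separately, which is precisely the step nobody seems to have written up.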

So I chalk this up to an itch not itchy enough to solve. Nobody has done the work and shared how to set up mRemoteNG with VcXsrv. Someone has solved the connection part, and someone else the ability to run X11 on Windows. But no one has put mRemoteNG and VcXsrv together. And that is a shame, given that MobaXterm runs X11 apps straight from the ssh window.

Maybe MobaXterm has the problem solved, but nobody is willing to take the extra step and make an open source solution for it. Like I said, not itchy enough.

Sunday, May 16, 2021

Pandemic and Mageia Madness

Fortunately, the pandemic has had little negative impact on me. Working remotely, cooped up in the house, endless remote meetings. Tell me something new; it's just more of it. And my experience with work during the pandemic is the opposite of most: I got even busier. New clients looking for a cheaper way of doing the same things. Companies looking at open source solutions mainly as a cost reduction option, suddenly okay with solutions that cost less even though they stick out in their MS Windows environment. I'm not complaining, but some days I'm at the edge of it.

I've recently moved to the most recent version of Mageia. Well, forced to is more likely. A distro upgrade broke and screwed up the loading of the kernel. I could have tried to fix it, but decided to just start fresh. The data directories were on a different partition (best practice ever), so installing fresh just meant I might have to deal with the configuration differences between the older KDE/Cinnamon and the most recent one (since it was reading my existing home folder). Long story short, it worked like a charm (other than my USB stick coming down with a case of bad blocks).

Then came the whole process of reconsidering the apps I really needed versus the apps I wanted (but almost never use). This is where my thoughts on the needs of the average user return. Does the average user use the apps I use or need? Do they use apps that I don't? This is important to me as a Linux advocate, because if Linux doesn't meet the needs of the average Joe, then adoption will always fall short.

This was a slow start.
