For what it’s worth you can convert the database to postgres if you want. I tried it out a few weeks ago and went flawlessly.
https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/db_conversion.html
Yeah, I’ve been using it for about a year and a half on my main devices and it’s been wonderful. I’m likely going to go down the list of supported providers from the gluetun docs and decide from there. Throwing my torrents and all that behind a VPN was the catalyst for signing up, so I’ll continue to look for that support first; everything else is secondary.
I’m pretty sure it’s entirely disabled. Their announcement post says it’s being removed and doesn’t call out any exceptions.
I run my clients through a gluetun container with forwarding set up and ever since their announced end of support date (July I think?) I have had 0B uploaded for any of my trackers.
E: realized you may be asking about proton, oops
Wow this is great. I’ve been having trouble getting exit nodes working properly with these two. Sad that mullvad dropped port forwarding though so I’m not sure if I’ll stay with them.
I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the bios for boot order, virtualization, and a few other settings.
I’ve since added a UPS to the mix, but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.
It’s a neat concept and if you need remote bios access it’s great, but people usually overestimate how useful that really is.
Yeah, it’s kind of ridiculous. At this point my most starred git repos are all patches to get various extensions working on the current gnome release.
I’ve been looking to switch away but nothing I’ve used has had the it factor I want.
Homelab for me too. Started off with a repurposed gaming PC and exploded into multiple hosts, tons of drives, and an itch to keep expanding.
Wow, I had no idea that there was a quote out there that aligns so well with my beliefs. I grew up in a semi religious household but was never forced to go to church. My parents encouraged me to go, not only to theirs but even go with friends that were different religions.
After going to various churches through some really vulnerable times I still don’t subscribe to any religion, but I also can’t bring myself to go full atheist.
Too bad that quote is way too long for a tattoo 🤣
You lose comment history and all that jazz too but it’s better than nothing. I’m not sure if devs plan to implement a way to do it but it’s one of the reasons I decided to roll my own instance. Nothing more frustrating than using someone else’s and losing access while they take days to get it back up.
Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there are two competing providers in my area that aren’t in cahoots (that I can tell). I much prefer the more expensive one and was able to get them to match the other’s price.
My wife has been dropping hints she wants to move to another state though and I’m low key dreading dealing with a new ISP/losing my current plan.
I run a separate container for each service that requires a db. It’s baked into my backup strategy at this point: the script I wrote reads dump credentials from environment variables, so I don’t have to update it for every new service I deploy.
If the container name ends in -dbm it’s MySQL, -dbp is Postgres, and -dbs would be SQLite if it needed its own container. The suffix triggers the appropriate backup command, which pulls the user, password, and db name from environment variables in the container.
I’m not too concerned about system overhead. I’ve debated running a single container per db type just to do it, but I also like not having a single point of failure for all my services (I even run separate VMs to keep stable services from being impacted while I test random stuff out).
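The suffix-dispatch idea above can be sketched roughly like this. It's a minimal sketch, not my actual script: the function names and the env var names (MYSQL_USER, POSTGRES_DB, SQLITE_DB, etc.) are assumptions based on the common official images.

```shell
#!/bin/sh
# Hypothetical sketch of suffix-based backup dispatch.
# Env var names below are assumptions from common official images,
# not necessarily what any given container actually sets.

# Map a container-name suffix to a database engine.
engine_for() {
  case "$1" in
    *-dbm) echo mysql ;;
    *-dbp) echo postgres ;;
    *-dbs) echo sqlite ;;
    *)     echo none ;;
  esac
}

# Run the matching dump command inside the container. Credentials come
# from the container's own environment, so new services need no script edits.
backup_container() {
  name="$1"
  case "$(engine_for "$name")" in
    mysql)
      docker exec "$name" sh -c \
        'exec mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
        > "backups/$name.sql" ;;
    postgres)
      docker exec "$name" sh -c \
        'exec pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' \
        > "backups/$name.sql" ;;
    sqlite)
      # SQLITE_DB (path to the db file) is a made-up variable for this sketch
      docker exec "$name" sh -c 'exec sqlite3 "$SQLITE_DB" .dump' \
        > "backups/$name.sql" ;;
    *)
      echo "no db suffix, skipping $name" >&2 ;;
  esac
}
```

The nice part of this shape is that adding a new service is just naming its db container with the right suffix; the script never has to change.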
Agreed. I haven’t come across any instances I care to participate in that have that enabled though.
This is ultimately why I decided to roll my own instance. I’m keeping my backup here though in case I mess something up, but full control is nice to have.
@[email protected] is correct, you can pass the values through that part of the UI. I used to do it that way and had Portainer watching my main branch to auto pull/deploy updates, but I recently moved away from it because I don’t deploy everything to one server and linking Portainer instances together was hit or miss for me.
Edit: I just deployed it like this (I hit deploy after taking the screenshot) and confirmed both inside the container that it sees everything as well as checking where Portainer drops the files on disk (it uses stack.env).
I don’t know why I did all that, but do with it what you will lol
This looks great. Gonna give it a whirl this weekend
You can already do this. You can specify an env file or use the default .env file.
The compose file would look like this:

```yaml
environment:
  PUBLIC_RADARR_API_KEY: ${PUBLIC_RADARR_API_KEY}
  PUBLIC_RADARR_BASE_URL: ${PUBLIC_RADARR_BASE_URL}
  PUBLIC_SONARR_API_KEY: ${PUBLIC_SONARR_API_KEY}
  PUBLIC_SONARR_BASE_URL: ${PUBLIC_SONARR_BASE_URL}
  PUBLIC_JELLYFIN_API_KEY: ${PUBLIC_JELLYFIN_API_KEY}
  PUBLIC_JELLYFIN_URL: ${PUBLIC_JELLYFIN_URL}
```
And your .env file would look like this:

```
PUBLIC_RADARR_API_KEY=yourapikeyhere
PUBLIC_RADARR_BASE_URL=http://127.0.0.1:7878
PUBLIC_SONARR_API_KEY=yourapikeyhere
PUBLIC_SONARR_BASE_URL=http://127.0.0.1:8989
PUBLIC_JELLYFIN_API_KEY=yourapikeyhere
PUBLIC_JELLYFIN_URL=http://127.0.0.1:8096
```
This is how I do all of my compose files, and then I throw .env in .gitignore and throw it into a local forgejo instance.
How so? The three biggest things I attribute to Google are search, ads, and their mail/calendar/drive/docs suite. The only thing I see Proton doing is the last, which serves as an alternative to more than just Google.
(I ask this as someone that does not use Proton as primary for anything)
I pretty much always leave stuff seeding once I get it these days. Ever since I bumped the disk space on my NAS it made it a lot easier to leave stuff instead of jockeying for space on disk.
My higher ratio items are all old shit like You Got Served lmao
I’m taking a dump in my closet