Question about dev/staging and databases
-
We just got going with FusionAuth. I love FusionAuth, by the way. I've been testing out a few auth solutions, particularly for Golang (our stack), and although I think FusionAuth is a bit of a resource hog, on balance I think it's the best. I am running it in a Kubernetes pod, and we are busy implementing introspection through an NGINX ingress with caching, which is fantastic.
I learnt that I needed to reserve a fair amount of request memory/CPU and set affinity to one node to keep the performance up; otherwise I saw very large I/O that ate up all the CPU/memory on the cluster, and it seemed to happen in cycles. This is just a dev/staging environment. Starting small with FusionAuth is a bit tough, and I hope that's not going to be a cost problem when scaling up. I'm not sure how many users can be hosted on, say, 1GB of memory.
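For context, the relevant part of my Deployment looks roughly like this (just a sketch of what I'm running; the node name and resource numbers are placeholders, not a recommendation):

```yaml
# Rough sketch of the FusionAuth Deployment (names and values are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fusionauth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fusionauth
  template:
    metadata:
      labels:
        app: fusionauth
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - pool-worker-1   # pin to a single node (placeholder hostname)
      containers:
        - name: fusionauth-app
          image: fusionauth/fusionauth-app:latest
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1"
```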
Last night all our microservices stopped working, and we worked out that FusionAuth was eating up all the database connections (this is a managed PostgreSQL database). We tried a few things, like enabling the connection pool in session mode, but we ended up having to increase the size of the PostgreSQL instance so it would accept all the connections. There are only three of us using it...
I started looking into MySQL as an alternative, since DigitalOcean allows 75 connections on its smallest instance. The problem is that they enforce a primary key on every table, so when I ran the MySQL scripts to load the tables, some failed because they don't have primary keys.
I'm not sure whether one performs better than the other, or how the number of connections differs between them.
I'd prefer to use a managed database, as it comes with backups and so on.
What are your thoughts here? Is there a way to manage the number of connections, or what is best practice? Is it worth looking into MySQL, and if so, how would we make that work on DigitalOcean?
Moved over from https://github.com/FusionAuth/fusionauth-issues/issues/733
-
Hiya,
The system requirements for FusionAuth are documented here: https://fusionauth.io/docs/v1/tech/installation-guide/system-requirements and may be worth reviewing.
FYI, Kubernetes isn't officially supported, but plenty of folks are running FusionAuth in that environment.
-
Thanks, I had a read. It doesn't mention how many database connections FusionAuth uses (broadly speaking); I've been stumped by that. It opens up a fair number of connections.
Also, I can't use the hosted version of MySQL on DigitalOcean... all the tables require primary keys, something to do with replication.
-
Hmmm.
Here's an issue tracking DigitalOcean managed database support; some managed databases don't work right now: https://github.com/FusionAuth/fusionauth-issues/issues/95
The number of open connections should be around 10. I believe that is per FusionAuth instance.
What were the specs you were seeing the issues with?
- What version of FusionAuth?
- How many pods are running it?
- What version of Postgres?
- What size were the pods (in terms of memory and CPU)?
- What are the steps to reproduce the negative performance impact?
We've seen FusionAuth (the application) run in 64MB of RAM. You can specify the maximum amount of memory used in the configuration file or via environment variables. More here: https://fusionauth.io/docs/v1/tech/reference/configuration
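For example, if you're running the Docker image in Kubernetes, you could cap the heap with an environment variable on the fusionauth-app container. The exact variable name depends on your FusionAuth version (recent images use FUSIONAUTH_APP_MEMORY), so double-check it against the configuration reference above:

```yaml
# Illustrative only; confirm the variable name for your FusionAuth version
env:
  - name: FUSIONAUTH_APP_MEMORY
    value: "512M"
```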
Note that if you don't need advanced search functionality, you can use the database search engine and avoid running Elasticsearch: https://fusionauth.io/docs/v1/tech/tutorials/switch-search-engines talks about how to switch between them. That may eliminate some of the memory pressure if you were running Elasticsearch.
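If you go that route with the Docker image, the search engine can also be selected via an environment variable (again, confirm the exact name for your version in the docs linked above), roughly:

```yaml
# Illustrative only: use the database search engine instead of Elasticsearch
env:
  - name: SEARCH_TYPE
    value: "database"
```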