We have seen a couple of people hit this exception in Kubernetes because the nodes can't talk to each other. Is your setup configured so that each node can reach every other node?
I would expect that the configuration would need to use the Kubernetes DNS names rather than IP addresses for all of the nodes. This is configurable in the configuration file or through environment variables. More here: https://fusionauth.io/docs/v1/tech/reference/configuration/ . Look for the fusionauth-app.url setting.
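As a rough sketch, in a StatefulSet you can derive that setting from the pod's stable DNS name via an environment variable. The service and namespace names below are illustrative, not from your setup, and the `FUSIONAUTH_APP_URL` variable name follows the usual mapping of `fusionauth-app.url` to an environment variable; verify it against the configuration reference for your version:

```yaml
# Illustrative container spec fragment: point fusionauth-app.url at the
# pod's stable DNS name instead of a pod IP so nodes can find each other.
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # e.g. fusionauth-0, fusionauth-1, ...
  - name: FUSIONAUTH_APP_URL
    # Assumes a headless Service named "fusionauth-headless" in the
    # "default" namespace and the default FusionAuth port 9011.
    value: "http://$(POD_NAME).fusionauth-headless.default.svc.cluster.local:9011"
```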
It looks like you switched to development mode from production mode, possibly to run some migrations automatically.
This message means that FusionAuth sees an entry in its node records that would lead to an incompatible configuration. What we really want to avoid is a cluster of FusionAuth instances with some in development and others in production, as that might lead to some confusing behavior.
If you are running a single node, you can try restarting FusionAuth a few times and that node record should be reset. You can also switch the node back to production and make sure it shuts down cleanly, which should remove the node record.
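To avoid drifting back into a mixed cluster, you can pin the mode explicitly. A minimal sketch, assuming the `fusionauth-app.runtime-mode` setting maps to the environment variable below (check the configuration reference for your version):

```yaml
# Illustrative fragment: force the node to start in production mode so a
# leftover development-mode node record cannot reappear after restarts.
env:
  - name: FUSIONAUTH_APP_RUNTIME_MODE
    value: "production"
```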
The short answer is that these events are from when the user was created or first registered for an application.
When a user is first created or registered for an application, we create a login event because we generate a JWT and, optionally, a refresh token for the user.
In these cases, we do not have an IP address to record in the login event.
We have discussed adding the IP address from the API request, but that request typically comes from a back-end system or internal service, so the IP address would not represent the location of the end user and would likely not be of much use.