
Question: Is there a way to import an existing map? #1

Open
meenie opened this issue Feb 18, 2021 · 21 comments

Comments

@meenie

meenie commented Feb 18, 2021

I got the server up and running, but I'm wondering if there is a way to import an existing map?

@rileydakota
Owner

rileydakota commented Feb 18, 2021

@meenie thanks for reaching out. Not directly via the template - but you could connect to the EFS volume created by the template from an EC2 instance, upload your map file to the location https://github.com/lloesche/valheim-server-docker expects (/config/worlds/ IIRC), and copy your current world files over. I can see about automating this into the solution via a script. https://docs.aws.amazon.com/efs/latest/ug/wt1-test.html
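For anyone who wants to try the EC2 route before it's scripted, a rough sketch (the filesystem ID, region, and world name are placeholders; the worlds/ path follows the /config/worlds location mentioned above, assuming the EFS root is mounted as /config in the container):

```shell
# Sketch only - fs-12345678, us-east-1, and MyWorld are placeholders.
# Run from an EC2 instance in the same VPC/security group as the EFS volume.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Copy the existing world files into the folder the container reads from.
sudo mkdir -p /mnt/efs/worlds
sudo cp MyWorld.db MyWorld.fwl /mnt/efs/worlds/
```

After copying, unmount with `sudo umount /mnt/efs` and terminate the instance so nothing keeps billing.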

Another option would be to quickly stand up an AWS Transfer Family server and use FTP/SFTP from your machine to transfer your current map file to the volume. Apparently this feature was just added https://aws.amazon.com/blogs/aws/new-aws-transfer-family-support-for-amazon-elastic-file-system/

I'd be happy to mock up a script to do this via the AWS CLI (or, if you want to do it yourself, that would be an awesome PR - maybe we include the script with instructions in the README for those who hit this same scenario).

@meenie
Author

meenie commented Feb 18, 2021

I tried last night to set up the AWS Transfer Family server and got to the point where it logged the user in, but it didn't have permission to see the files. I even gave the user an admin role with all permissions and it still couldn't, haha.

I was going to go the EC2 route, but if you know how to get it done relatively easily and can provide instructions, that would be amazing! I'm not an AWS expert.

Thanks for your quick reply! :D

@rileydakota
Owner

My gut wants to go the Transfer Family route if I can figure out the file permissions issue - it would avoid having to launch and connect to an EC2 instance, and writing a script to create a Transfer Family server, connect via FTP, copy the map files, and clean it all up would be rather easy. I should have some time to troubleshoot this tonight or tomorrow. The only thing I can suggest in the meantime is checking whether Transfer Family created an EFS "Access Point" on the volume, and whether it is enforcing permissions (https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html). Also curious whether the task needs to be stopped while you are attempting to copy. Will keep you posted.

EDIT - it might also be worth messing with the WORLDS_FILE_PERMISSIONS environment variable for the container, as it is set to 655 - that could be stopping you from modifying the file (although my Unix file-system permissions knowledge is rusty)
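For reference, a quick local demonstration of what mode 655 means - only the owning user can write, so a non-owner SFTP user would be blocked (the file name here is just an example):

```shell
# 655 = owner rw-, group r-x, other r-x: only the owning user may write.
touch demo_world.db
chmod 655 demo_world.db
stat -c '%a %A' demo_world.db   # prints: 655 -rw-r-xr-x
```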

@rileydakota
Owner

Oh - and if you can share the exact steps you took and the errors you saw, that would be great as well.

@sdredsoxfan

I've been fighting with EC2 and AWS for a while trying to get a server hosted for Valheim, but assuming I get this working, I'd also love any information about porting your own world maps to the File Storage System so I can resume an already existing world.

Any idea how much traffic this would be able to serve? Can 10 players play concurrently and smoothly? I'd assume so.

@sdredsoxfan

So I've gotten to a similar point with Transfer Family. I'm able to connect to the Transfer Family, but I'm then being rejected with errors that say, "Is this an SFTP server?"

I've opened port 22 for TCP connections in the attached security group to EFS, but that didn't work. I'm not sure what I'm missing, but I can take a look at the Access Points to see if that would do it.

I also found this documentation, which supplies a CloudFormation template for creating the resources needed to allow SFTP into the EFS, but I'm having trouble figuring out how to tweak or port that template over to the existing one supplied here. I'm also a little unclear on how I would replicate these changes through the console to create a solution. New to AWS here myself, so I'm learning quite a bit, and a little slowly, but I'm hoping I can get something up soon!

Attached is the CloudFormation template as a text file. It's written in YAML though.
aws-transfer-custom-idp-basic-apig.txt

@sdredsoxfan

Update: I've gotten a little further here and was able to receive a new error:

Permission denied.
Error code: 3
Error message from server (US-ASCII): Unable to lookup path: permission denied for /fs-11c5b715

I think @rileydakota may be correct in suspecting a permissions issue. The things I tweaked to get to this state were based on this page I found hidden across the internet: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-cannot-initialize-sftp-error/

It took a surprisingly long time to find this.

@sdredsoxfan

PS. If I get this working, I'll try to prepare a little writeup of the work I did in the console. It'd be awesome if we could add a feature to support this via CloudFormation, deployed with "npx cdk deploy". I'm not very well versed here, but it would be great to at least provide this flexibility for someone.

@rileydakota
Owner

rileydakota commented Feb 21, 2021

@sdredsoxfan about to start hacking at this myself as well - getting Transfer Family working would be a much more elegant solution than EC2. Would love a README or code PR, however.

Re: Can the server support X players smoothly: FWIW I have about 6 players at peak playing, and these are the performance metrics:
[screenshot: performance metrics for the ECS service]

The only performance issues I've had appear to be related more to not restarting the server regularly enough (the Docker image by default only restarts when the server software has an update, IIRC). None appear related to AWS or ECS.

Re: Adding a feature to do this for someone in the template: Great minds think alike :) - our best method of doing this would probably be adding a CloudFormation Custom Resource that uses a Lambda to create the Transfer Family server, orchestrate the file transfer, and then remove the Transfer Family server for us (Transfer Family is EXPENSIVE). CDK has a capability to upload a specified file to S3 as well (perhaps we can use this?)
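As a sketch of what that Lambda would orchestrate, the create-then-tear-down cycle via the AWS CLI looks roughly like this (server ID and the user/upload steps are placeholders or elided):

```shell
# Sketch only - IDs are placeholders. Create an EFS-backed SFTP endpoint...
aws transfer create-server \
  --domain EFS \
  --protocols SFTP \
  --identity-provider-type SERVICE_MANAGED

# ... create the user, upload the map file over SFTP ...

# ... then delete the server so it stops billing (it is priced per hour).
aws transfer delete-server --server-id s-1234567890abcdef0
```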

@rileydakota
Owner

@sdredsoxfan I appear to have gotten it working - I can ls into the worlds folder, download them, and I tested using put to overwrite one of the backup files:

[screenshot: sftp session listing the worlds folder]

I think the biggest "Gotchas" here will be:

Setting the UID and GID to 0 to override any permissions issues:
[screenshot: Transfer Family user POSIX profile with UID/GID set to 0]

And making sure you assign an IAM role that has permissions to access EFS. I am skipping some security-related things here - but this is obviously a temporary fix to allow people to upload their map file. If you can test the actual put command on the map file itself, that would be fantastic!
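For testing the put itself, a hedged example of the SFTP session (the endpoint, user name, filesystem ID, and world file names are all placeholders):

```shell
# Placeholders throughout - substitute your Transfer Family endpoint,
# user name, filesystem ID, and world file names.
sftp myuser@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com <<'EOF'
cd /fs-12345678/worlds
put MyWorld.db
put MyWorld.fwl
ls -l
EOF
```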

@sdredsoxfan

I think that's exactly what I'm missing right now is the IAM role that has permissions to access EFS. I was about to get that working and try it out, but took a little break to think about it. I'm going to give that a shot though and I'm assuming this will work.

RE this comment: I don't know that it makes sense to go as far as to automatically orchestrate the file xfer via this script, however setting up the Transfer Family resource correctly and allocating an appropriate IAM role, configuring a User, and allowing the client to update password information for setting up SFTP would be all that's needed. I'm sure there's probably a way to automatically generate an ssh rsa key-pair, but that'll also be a necessity.
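Generating that key pair is straightforward with ssh-keygen (the file name is arbitrary; the .pub contents are what would get attached to the Transfer Family user):

```shell
# Create a 4096-bit RSA key pair with no passphrase.
ssh-keygen -t rsa -b 4096 -N '' -f transfer_user_key
cat transfer_user_key.pub   # this public key goes on the Transfer Family user
```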

Anyways, my point is that we can leave it up to the client to then just SFTP in manually to upload. At least that might be a good initial step.

I saw that we can upload to S3. What does that get us, though? Our worlds are all stored in EFS, so would you transfer from S3 to EFS?

@sdredsoxfan

Would you mind sharing your IAM role permissions for EFS? Trying to dig through and find it somewhere, but would, selfishly, be much quicker to just use your example! 😂

@sdredsoxfan

Nevermind that! I figured it out. 😂 Thanks for being a rubber ducky, github comment section!

@rileydakota
Owner

@sdredsoxfan - I am good with that approach as well - my original thought of using S3 was to fully handle the file upload via a custom resource:

  1. CDK uploads the user-specified map file to S3
  2. The Lambda function retrieves the map file from S3
  3. The Lambda function creates the Transfer Family server and user, and orchestrates the upload
  4. Cleanup

To your point, this would be a good bit of work. I am good with just providing an option to enable/disable Transfer Family, plus instructions on how to import the map manually.

IAM Role Policy - this SHOULD work:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:DescribeMountTargets"
            ],
            "Resource": "*"
        }
    ]
}

@sdredsoxfan

Yeah this is one of those decisions where it's like, "How many people will actually want to upload a custom map compared to how many people just want to use this to host a server?"

Your approach MAY be better because if the user decides to NOT supply a world, then there's no reason to spin up any of the extra resources. Conversely, by assuming that we're going to allow users to manually upload worlds, then we've spun up resources that may go unused and won't be cleaned up without manual intervention.

If there's something that I may be able to help with, let me know! But for right now, I've gotten my world copied over to the File System and am going to try re-running npx cdk deploy after tweaking my world name! CROSS YOUR FINGERS!

@rileydakota
Owner

rileydakota commented Feb 21, 2021

@sdredsoxfan we could wrap our current config in a class, and have an interface where the user defines server params as such:

const serverParams = {
   enableXferFtp: true,
   env: {
      SERVER_NAME: 'myServer',
      ...
   }
}

const server = new valheimServer(this, 'valheimServer', serverParams)

in the class we could have logic to handle this:

if (props.enableXferFtp) {
   const ftp = new transfer.CfnServer(this, 'transferServer', transferServerProps)
   // OTHER CODE HERE
}

It would then be on the user to run our template, follow instructions, then change enableXferFtp to false and rerun npx cdk deploy to deprovision it.

If you are interested - this would be an AWESOME contribution. Otherwise a set of written instructions or CLI commands to standup/disable it to the readme would still be fantastic! No pressure either way.

Also FWIW I originally avoided going the CDK construct route as I figured the majority of users would just want a working template - but we could always develop it as a class in the template itself.

@sdredsoxfan

Yeah, I can try to make these changes for a contrib. I've got raid in about an hour, though, and would like to ensure I have enough time to dedicate to it, so if you're okay with a little delay, I can try to get to it later this evening or tomorrow!

@rileydakota
Owner

rileydakota commented Feb 21, 2021

no rush at all - excited to see it!

@DoubleL73

Hello, I tried a more manual approach but haven't succeeded yet.
I modified the Dockerfile from https://github.com/lloesche/valheim-server-docker to copy my map files, and then replaced

    image: ecs.ContainerImage.fromRegistry("lloesche/valheim-server"),

with

    image: ecs.ContainerImage.fromEcrRepository(myDockerImage.repository, 'latest'),

and added

    const myDockerImage = new ecrassets.DockerImageAsset(this, 'valheim-server-custom', {
       directory: 'my/docker/file/dir',
    });

I previously succeeded in running the project without any changes and was able to play on a new server. So I removed all AWS services for a fresh start, but npx cdk deploy was stuck at step 19/21 for a pretty long time, so I quit.

I'm not an AWS/Docker wizard - am I missing something? Is this method doable?

@rileydakota
Owner

Hi @DoubleL73 - yeah, this approach should be feasible. When you run npx cdk deploy, does the Fargate task ever make it to "running" status? My initial thoughts would be:

  1. The Fargate task failing to pull down your custom image successfully, or
  2. Something up with the container causing it to never fully stabilize.

The detail about the task would be a great first troubleshooting step.

@DoubleL73

DoubleL73 commented Mar 1, 2021

@rileydakota Thanks for your reply!
It seems that you are right: the Fargate task is failing to pull down my custom image successfully.
The ValheimServerAwsCdkStack service events show it is continuously starting tasks, and those tasks are stopping after a while because of:

CannotPullContainerError: inspect image has been retried 1 time(s): failed to resolve ref "my-key.dkr.ecr.my-zone.amazonaws.com/aws-cdk/assets:latest": my-key.dkr.ecr.my-zone.amazonaws.com/aws-cdk/assets:latest: not found

I tried to use your original script, but this time I replaced

    image: ecs.ContainerImage.fromRegistry("lloesche/valheim-server"),

with

    image: ecs.ContainerImage.fromAsset('my/docker/file/dir'),

It kinda worked: the deployment went well and I could join the server by IP, but the game started on a new map at day 1, not the world I put in the Docker image.

WORLD_NAME is the same as my world's .db/.fwl file name without the extension. Am I missing something? Maybe I should ask at https://github.com/lloesche/valheim-server-docker since the deployment went well.
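As a local sanity check of the naming convention discussed in this thread (the container reportedly looks for <WORLD_NAME>.db and <WORLD_NAME>.fwl under /config/worlds - the paths and world name here are illustrative):

```shell
# Illustrative only: mimic the expected layout to check the names line up.
WORLD_NAME=MyWorld
mkdir -p config/worlds
touch "config/worlds/${WORLD_NAME}.db" "config/worlds/${WORLD_NAME}.fwl"
ls config/worlds   # both files should appear, named exactly after WORLD_NAME
```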
