Add Katana Docs #54

Merged · 28 commits · Aug 31, 2024

Changes from 22 commits
2 changes: 2 additions & 0 deletions .gitignore
@@ -10,3 +10,5 @@ katana
test.go
vendor/*
*.log
docs/themes/*
!docs/content/Katana
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "docs/themes/hugo-geekdoc"]
path = docs/themes/hugo-geekdoc
url = https://github.com/thegeeklab/hugo-geekdoc.git
6 changes: 6 additions & 0 deletions Makefile
@@ -95,6 +95,12 @@ set-env:
go build && \
./katana

setup-docs:
	git submodule update --init --recursive
	cp ./docs/config.sample.toml ./docs/config.toml
	npm install --prefix ./docs/themes/hugo-geekdoc
	npm run build --prefix ./docs/themes/hugo-geekdoc

# Prints help message
help:
@echo "KATANA"
Empty file added docs/.hugo_build.lock
Empty file.
6 changes: 6 additions & 0 deletions docs/archetypes/default.md
@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---

25 changes: 25 additions & 0 deletions docs/config.sample.toml
@@ -0,0 +1,25 @@
baseURL = "http://localhost"
title = "Geekdocs"
theme = "hugo-geekdoc"

pluralizeListTitles = false

# Geekdoc required configuration
pygmentsUseClasses = true
pygmentsCodeFences = true
disablePathToLower = true

# Required if you want to render robots.txt template
enableRobotsTXT = true

# Needed for mermaid shortcodes
[markup]
[markup.goldmark.renderer]
# Needed for mermaid shortcode
unsafe = true
[markup.tableOfContents]
startLevel = 1
endLevel = 9

[taxonomies]
tag = "tags"
19 changes: 19 additions & 0 deletions docs/content/Architecture/design.md
@@ -0,0 +1,19 @@
---
title: "Design"
resources:
  - name: arch
    src: "../../resources/_gen/images/arch.svg"
    title: Architecture
---

Katana uses a namespace-per-team model: each team is assigned a namespace, and all of the team's resources are deployed into it. This gives every team a secure, isolated environment while still allowing teams to interact with one another.

Every team starts with a team pod, deployed into the team's namespace. The team pod provides the team with a persistent environment and a persistent storage volume. It is deployed using a [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), which gives the pod a stable identity and keeps the same persistent volume bound to it across restarts, so the team's storage is always available. A sketch of this setup is shown below.
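
Purely as an illustration, creating a team's namespace and team-pod StatefulSet with client-go could look roughly like this; the function name, labels, and image are hypothetical placeholders, not Katana's actual code:

```Golang
package katana

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deployTeamPod creates a namespace for the team and a single-replica
// StatefulSet for its team pod.
func deployTeamPod(clientset *kubernetes.Clientset, team string) error {
	ns := team + "-ns"

	// Each team gets its own namespace.
	_, err := clientset.CoreV1().Namespaces().Create(context.TODO(),
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}},
		metav1.CreateOptions{})
	if err != nil {
		return err
	}

	replicas := int32(1)
	labels := map[string]string{"team": team}
	sts := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: team + "-pod", Namespace: ns},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: team + "-svc",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "katanad",
						Image: "katanad:latest", // placeholder image
					}},
				},
			},
			// VolumeClaimTemplates for the team's persistent volume
			// would be declared here as well.
		},
	}
	_, err = clientset.AppsV1().StatefulSets(ns).Create(context.TODO(), sts, metav1.CreateOptions{})
	return err
}
```

A StatefulSet, rather than a Deployment, is what ties the pod to its persistent volume claim across restarts.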

Teams are given SSH access to the team pod. Each team receives a username-password pair for SSHing into its pod, and the pod is assigned a public IP address so it can be reached from outside the cluster.

Challenges are pods deployed into the team's namespace. When a challenge is patched, its pod is redeployed.

Katana has its own namespace, used to deploy the Katana components: the flag-handler service, the challenge-checker service, the logging service, and the Git server. We will discuss these components in more detail in the next section.

{{< img name="arch" size="large" lazy=false >}}
<!-- TODO: the img shortcode is not rendering yet -->
9 changes: 9 additions & 0 deletions docs/content/ChallengeChecker/main.md
@@ -0,0 +1,9 @@
---
title: "Challenge Checker"
---

# WIP
> **Review comment (author):** @h4shk4t is this finished?

The challenge checker is responsible for running checks against the challenges. It will be deployed as a Kubernetes CronJob/Service that runs at every tick, checks the status of each challenge, and updates that status in the database.

It has been decided to use a pod in the master namespace that routinely invokes a Knative service, which starts a new short-lived pod to run the checks. Each short-lived pod returns a success or failure response for its respective request. At any given time there can be up to (no. of challenges × no. of teams) such pods.
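
Purely as an illustration, the CronJob variant could be registered with client-go along the following lines; the namespace, schedule, and image here are assumptions, not Katana's actual values:

```Golang
package katana

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCheckerCronJob schedules the challenge checker to run on every tick
// (assumed here to be one minute) in the master namespace.
func createCheckerCronJob(clientset *kubernetes.Clientset) error {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "challenge-checker", Namespace: "katana"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // assumed tick length
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyNever,
							Containers: []corev1.Container{{
								Name:  "checker",
								Image: "challenge-checker:latest", // placeholder image
							}},
						},
					},
				},
			},
		},
	}
	_, err := clientset.BatchV1().CronJobs("katana").Create(context.TODO(), cj, metav1.CreateOptions{})
	return err
}
```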
52 changes: 52 additions & 0 deletions docs/content/DatabaseSetup/index.md
@@ -0,0 +1,52 @@
---
title: "Database"
---

# Introduction

Katana uses MongoDB to store its data, including challenge data, user data, and more. This page walks you through setting up the MongoDB database, which runs in the master namespace.

# Setup

Set up the database by changing the config variables in `config.toml` to the following:

```toml
[mongo]
username = "[YOUR USERNAME HERE]"
password = "[YOUR PASSWORD HERE]"
port = "32000"
mongosh_version = "1.6.1"
```

Default YAML files in the `manifests` folder deploy the MongoDB pods in the master namespace during infraset. To deploy the database, first set up the infrastructure via the `/api/v2/admin/infraSet` endpoint, then hit the `/api/v2/admin/db` endpoint to set up the database.
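
As a minimal sketch, the two endpoints could be hit in order like this; the API host and port, and the use of GET, are assumptions about your deployment rather than documented behaviour:

```Golang
package main

import (
	"fmt"
	"net/http"
)

func main() {
	base := "http://localhost:8080" // assumed Katana API address; adjust to your deployment
	for _, endpoint := range []string{"/api/v2/admin/infraSet", "/api/v2/admin/db"} {
		resp, err := http.Get(base + endpoint) // HTTP method is an assumption
		if err != nil {
			panic(err)
		}
		fmt.Println(endpoint, resp.Status)
		resp.Body.Close()
	}
}
```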

# Go Code For Database Setup

The following code is responsible for setting up the database.

In `connection.go`:

```Golang
var client, err = mongo.Connect(ctx, options.Client().ApplyURI(
	"mongodb://"+configs.MongoConfig.Username+":"+configs.MongoConfig.Password+
		"@"+configs.ServicesConfig.ChallengeDeployer.Host+":"+configs.MongoConfig.Port+
		"/?directConnection=true&appName=mongosh+"+configs.MongoConfig.Version))
```

In `db.go`:

```Golang
func DB(c *fiber.Ctx) error {
	client, err := utils.GetKubeClient()
	if err != nil {
		log.Println(err)
		return err
	}

	service, err := client.CoreV1().Services("default").Get(context.TODO(), "mongo-nodeport-svc", metav1.GetOptions{})
	if err != nil {
		log.Println(err)
		return err
	}

	// Print the IP address of the service
	fmt.Println(service.Spec.ClusterIP)
	mongo.Init()

	return c.SendString("Database setup completed")
}
```
11 changes: 11 additions & 0 deletions docs/content/Katana/getting-started.md
@@ -0,0 +1,11 @@
---
title: "Getting Started"
---

Katana is a cutting-edge Attack-Defense Capture the Flag (CTF) platform. CTF is a popular cybersecurity competition format in which participants compete by solving complex cybersecurity challenges. Katana makes it possible for users to easily host and play CTF competitions in a secure and scalable environment.

One of the most notable features of Katana is its use of [Kubernetes](https://kubernetes.io/). Kubernetes is an open-source container orchestration system that simplifies the deployment, scaling, and management of containerized applications. By leveraging Kubernetes, Katana is able to provide a highly scalable and resilient platform that can handle a large number of users and high volumes of traffic.

The platform also provides a dashboard for managing competitions. The dashboard allows users to manage the challenges, teams, and scoring for the competition. It also provides real-time statistics and monitoring of the competition, enabling the organizers to make adjustments as needed.

Katana's use of Kubernetes also helps keep the platform secure. Kubernetes provides a variety of security features designed to protect against attacks such as denial-of-service (DoS) attacks and data breaches, including role-based access control (RBAC), network policies, and pod security standards. Keeping the cluster current with upstream security patches ensures that the platform stays up to date.
5 changes: 5 additions & 0 deletions docs/content/Katana/setup.md
@@ -0,0 +1,5 @@
---
title: "Setup"
---

Anshul
> **Review comment (author):** @Perseus-Jackson477 can you pick this up? Else ask someone who can finish this.

28 changes: 28 additions & 0 deletions docs/content/KatanaD/main.md
@@ -0,0 +1,28 @@
---
title: "KatanaD"
> **Review comment (author):** Need to update and finish this.

---

Each team, under its own namespace, has a master pod. We have named the general structure of a master pod KatanaD. Each team has access to its own master pod, which contains these important directories:

1. Challenge Files
2. Another item //Update these after diagram.
3. Another item //Update these after diagram.
4. And another item. //Update these after diagram.

<!-- TODO: update the diagram of a master pod with the things inside it. -->
![Image Not Found](/team-pods-architecture.png)

## Structure

The Dockerfile can be accessed [here](https://github.com/sdslabs/katanad/blob/mainpod/Dockerfile).

## General Flow

- Challenge gets copied to the master pod (VANSH)
- [Listening service unzips the challenge in the master pod (VANSH)](../KatanaD/v1.md)

Now let's look at how a challenge pod gets deployed from the master pod.

- [Challenge-Containers v1](/KatanaD/v1/)

- [Challenge-Containers v2](/KatanaD/v2)
107 changes: 107 additions & 0 deletions docs/content/KatanaD/v1.md
@@ -0,0 +1,107 @@
---
title: "V1 - Challenge Containers"
> **Review comment (author):** Is this finished?

---

Initially we were planning to deploy images from within the cluster. There are two parts to this:

1. Image Building
2. Applying Deployments

## Image Building

Kubernetes does not support running Docker inside its containers, so we have looked at alternative approaches. Click each link to see the details and the reason it was rejected.

1. Mounting the Docker socket into the container

   - Rejected: by giving a team root access to the Docker daemon, it could delete images or fiddle with the resources of other teams.

[WIP: explore and write up each of the remaining sub-methods]

2. [Docker out of docker]()
3. [Docker in Docker]()
4. [Kaniko]()
5. [Moby]()

## Applying Deployments

### InCluster Deployment

1. First we create an RBAC role to allow access for creating in-cluster deployments. Here we have used the `default` service account and namespace for testing purposes; write and apply an RBAC role appropriate to your own infra and flow.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployments-and-deployments-scale
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployments-and-deployments-scale-rb
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: deployments-and-deployments-scale
  apiGroup: rbac.authorization.k8s.io
```

2. Next we use the Kubernetes client API to apply the `deployment.yaml` stored in KatanaD, which creates the deployment. The image must already be inside the Minikube Docker daemon, which is the dependency on the first step.

A basic challenge Deployment YAML follows. The challenge's image must be inside the Minikube Docker daemon, and `imagePullPolicy` is set to `Never` to avoid a Docker Hub pull.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: challenge-deployment
  labels:
    app: challenge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: challenge
  template:
    metadata:
      labels:
        app: challenge
    spec:
      containers:
        - name: challenge
          image: challenge:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
```

A Python file to apply the deployment:

```py
from os import path

import yaml
from kubernetes import client, config

# Load credentials from the pod's service account (we are inside the cluster).
config.load_incluster_config()

# print(path.dirname(__file__))

with open(path.join(path.dirname(__file__), "deployment.yaml")) as f:
    dep = yaml.safe_load(f)
    k8s_apps_v1 = client.AppsV1Api()
    resp = k8s_apps_v1.create_namespaced_deployment(
        body=dep, namespace="default")
    print("Deployment created. status='%s'" % resp.metadata.name)
```

### Outcluster deployments

Out-of-cluster deployments are pretty straightforward. You can run a script using `kubectl` commands, or use the client APIs and write files similar to the ones above, changing `config.load_incluster_config()` to `config.load_kube_config()`.
11 changes: 11 additions & 0 deletions docs/content/KatanaD/v2.md
@@ -0,0 +1,11 @@
---
title: "V2 - Challenge Containers"
> **Review comment (author):** Need to finish.

---

# Introduction

In V2, we are planning to take the git diff of the challenge source code after a patch, send it outside the cluster, compile the code, build the image in Docker, load the image into the Minikube Docker daemon, and then apply the `deployment.yaml` files.
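
A minimal sketch of that out-of-cluster pipeline, with placeholder repository paths and image names (the actual V2 implementation is still WIP):

```Golang
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and aborts on failure; all paths and names below
// are placeholders.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
	}
}

func main() {
	// 1. Pull the patched source (the git diff after a team's patch).
	run("git", "-C", "/srv/challenge-repo", "pull")
	// 2. Build the challenge image outside the cluster.
	run("docker", "build", "-t", "challenge:latest", "/srv/challenge-repo")
	// 3. Load the image into the Minikube Docker daemon.
	run("minikube", "image", "load", "challenge:latest")
	// 4. Apply the deployment manifest.
	run("kubectl", "apply", "-f", "deployment.yaml")
}
```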

# [WIP]

Add image here
49 changes: 49 additions & 0 deletions docs/content/Patching/main.md
@@ -0,0 +1,49 @@
---
title: "Patching Service"
> **Review comment (author):** Need to finish this.

---


## Introduction

The patching service of Katana uses a locally run Git service called Gogs, running in the admin namespace. We chose not to use GitHub to avoid the latency of pulling changes over the internet. The steps are listed below.

{{< toc >}}

## Initialisation

- ### Infraset
During this step the MySQL, MongoDB, and Gogs pods are created.

- ### Database Setup
We establish connections with MongoDB and MySQL. A MongoDB admin is set up with team credentials.

- ### GitServer
We hit `gogsIP/install`, which creates the Gogs tables in the MySQL pod. If using a non-cloud cluster (like Minikube), establish a connection with the LoadBalancer [```minikube tunnel```]. As of now you have to hit the database setup one more time after GitServer to establish the admin user in Git.

- ### Create Teams
This creates the namespaces and the master pod for each team. It also creates a user in the Gogs database for each team.

## Setting up a challenge

Whenever a challenge is set up, the broadcasting service is invoked. It creates a private repository for that challenge for each team and applies a YAML file for the challenge in each team's namespace. The broadcast service then sends a zip file of the challenge to every pod, where it is unzipped and initialised dynamically with respect to each team's repository. We pull once to make sure the histories of the local copy and the repository are in sync. A sketch of the repository-creation step is shown below.
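
This is a hedged sketch of creating one team's private challenge repository through the standard Gogs admin API; the host, token handling, and function name are assumptions rather than Katana's actual code:

```Golang
package katana

import (
	"bytes"
	"fmt"
	"net/http"
)

// createTeamRepo creates a private challenge repository for one team via the
// Gogs admin API, authenticating with an admin access token.
func createTeamRepo(gogsURL, adminToken, team, challenge string) error {
	body := []byte(fmt.Sprintf(`{"name": %q, "private": true}`, challenge))
	url := fmt.Sprintf("%s/api/v1/admin/users/%s/repos", gogsURL, team)

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "token "+adminToken)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("repo creation for team %s failed: %s", team, resp.Status)
	}
	return nil
}
```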

## Patch Challenge
In `/usr/bin/` we have a `patch_challenge` bash file which essentially runs a simple `git add`-to-`git push` sequence. It takes the commit message as its argument, so teams can use their own commit messages and find it easier to backtrack to a previous patch.
As soon as a push is made, a Gogs webhook sends a POST request, upon which the updates are pulled, an image is created and pushed into the K8s registry. The challenge pod of that particular team is killed, and when it restarts, it pulls the latest image from the registry. A sketch of such a webhook receiver is shown below.
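
A minimal sketch of such a webhook receiver; the registry name, repository path, namespace, and port are placeholders, not Katana's actual values:

```Golang
package main

import (
	"log"
	"net/http"
	"os/exec"
)

// On each push event, pull the updates, rebuild and push the image, then
// delete the team's challenge pod so it restarts with the latest image.
func main() {
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "POST only", http.StatusMethodNotAllowed)
			return
		}
		for _, c := range [][]string{
			{"git", "-C", "/srv/challenge-repo", "pull"},
			{"docker", "build", "-t", "registry.local/challenge:latest", "/srv/challenge-repo"},
			{"docker", "push", "registry.local/challenge:latest"},
			// Kill the pod; on restart it pulls the latest image.
			{"kubectl", "delete", "pod", "-n", "team-1-ns", "-l", "app=challenge"},
		} {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				log.Printf("%v failed: %v\n%s", c, err, out)
				http.Error(w, "patch failed", http.StatusInternalServerError)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```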

## Inside/Out

To reduce latency, the following architectural decisions were taken: the MySQL server and the Gogs service run within the K8s cluster, while pulling the changes and building the image happen outside the cluster.

## Image within

We "briefly" considered the concept of maybe creation of images within the pod and pushing it into the registry from within the pod itself. This would lead to the changes never having to leave the cluster, thus increasing speed by a lot. However, this was scrapped as to make an image you have to either:

- Create a DIND (Docker in Docker), which would have secuirity impact.
- Using Kaniko (A Go based library which allows image creation within the pod). However pushing it into the registry recquired to provide sudo privilege to team pods, which essentially left the entire cluster vulnerable to attacks.

Thus the final decision was to run the patch service in the aforementioned manner, giving teams a seamless A&D CTF experience while making it easier for the admin to host one.