18 Commits

Author SHA1 Message Date
0e133ae6db fix(path): fixing the default compose file check
All checks were successful
Go-Tests / tests (pull_request) Successful in 1m54s
Go-Tests / sonar (pull_request) Successful in 1m15s
2026-03-15 21:50:26 +01:00
5d839035b9 feat(depends): add suffix on RBAC and SA 2026-03-15 10:15:47 +01:00
7e1bbdc9b3 feat(quality): remove unused modules 2026-03-15 09:43:58 +01:00
f175416ac2 feat(quality): fix duplicates and modernize 2026-03-15 09:43:16 +01:00
613baaf229 feat(depends): add RBAC 2026-03-15 08:55:24 +01:00
8fc9cb31c4 feat(depends): Check call to kubernetes API 2026-03-08 23:50:29 +01:00
78b5af747e feat(depends): Use kubernetes API for depends_on management
We were using netcat on the port to check if a service is up, but we
can actually do as Docker / Podman compose does and check the status. For now,
I'm using the endpoint status, but maybe we can just check if the object
is "up".
2026-03-08 23:47:13 +01:00
269717eb1c fix(err): No port with depends_on
All checks were successful
Go-Tests / tests (push) Successful in 3m25s
Go-Tests / sonar (push) Successful in 42s
As #182 was not clear: the `depends_on` entry in a compose file needs, at
this time, to check the port of the dependent service. If the port is
not declared (via ports or with a label), we need to "fail", not to "warn".

Fixes #182
2026-03-08 22:52:24 +01:00
61896baad8 feat(logger) Change others logs 2026-03-08 22:46:26 +01:00
feff997aba feat(logger): Add a Fatal logger 2026-03-08 22:38:03 +01:00
89e331069e Merge pull request 'Typo: replace "skpped" with "skipped"' (#183) from macier-pro/katenary:fix-typos into master
All checks were successful
Go-Tests / tests (push) Successful in 2m33s
Go-Tests / sonar (push) Successful in 42s
Reviewed-on: #183
2026-01-28 20:34:08 +00:00
88ce6d4579 Typo: Replace "skpped" with "skipped"
Some checks failed
Go-Tests / tests (pull_request) Successful in 1m39s
Go-Tests / sonar (pull_request) Failing after 35s
2026-01-19 09:11:45 +00:00
3e80221641 Merge pull request 'fix: convent error' (#178) from kanrin/katenary:master into master
All checks were successful
Go-Tests / tests (push) Successful in 1m39s
Go-Tests / sonar (push) Successful in 37s
Reviewed-on: #178
2025-12-07 10:08:33 +00:00
990eda74eb fix: convent error
Some checks failed
Go-Tests / tests (pull_request) Successful in 2m54s
Go-Tests / sonar (pull_request) Failing after 24s
2025-10-28 18:42:06 +08:00
7230081401 fix(install): bad release substitution 2025-10-20 00:02:53 +00:00
f0fc694d50 Fix typo
not important but...
2025-10-18 13:22:36 +00:00
d92cc8a01c Fixup comments remove hard coded tagname 2025-10-18 13:22:36 +00:00
3abfaf591c feat(install)
Installation should now be taken from katenary.io
2025-10-18 13:22:36 +00:00
23 changed files with 735 additions and 98 deletions

View File

@@ -6,13 +6,13 @@ package main
import (
"fmt"
"log"
"os"
"strings"
"katenary.io/internal/generator"
"katenary.io/internal/generator/katenaryfile"
"katenary.io/internal/generator/labels"
"katenary.io/internal/logger"
"katenary.io/internal/utils"
"github.com/compose-spec/compose-go/v2/cli"
@@ -28,7 +28,7 @@ func main() {
rootCmd := buildRootCmd()
if err := rootCmd.Execute(); err != nil {
log.Fatal(err)
logger.Fatal(err)
}
}

View File

@@ -97,7 +97,8 @@ Katenary transforms compose services this way:
- environment variables will be stored inside a `ConfigMap`
- image, tags, and ingresses configuration are also stored in `values.yaml` file
- if named volumes are declared, Katenary creates `PersistentVolumeClaims` - not enabled in values file
- `depends_on` needs that the pointed service declared a port. If not, you can use labels to inform Katenary
- `depends_on` uses Kubernetes API by default to check if the service endpoint is ready. No port required.
Use label `katenary.v3/depends-on: legacy` to use the old netcat method (requires port).
For any other specific configuration, like binding local files as `ConfigMap`, binding variables, adding values with
documentation, etc., you'll need to use labels.
@@ -147,10 +148,8 @@ Katenary proposes a lot of labels to configure the helm chart generation, but so
### Work with Depends On?
Kubernetes does not provide service or pod starting detection from others pods. But Katenary will create `initContainer`
to make you able to wait for a service to respond. But you'll probably need to adapt a bit the compose file.
See this compose file:
Katenary creates `initContainer` to wait for dependent services to be ready. By default, it uses the Kubernetes API
to check if the service endpoint has ready addresses - no port required.
```yaml
version: "3"
@@ -167,9 +166,7 @@ services:
MYSQL_ROOT_PASSWORD: foobar
```
In this case, `webapp` needs to know the `database` port because the `depends_on` points on it and Kubernetes has not
(yet) solution to check the database startup. Katenary wants to create a `initContainer` to hit on the related service.
So, instead of exposing the port in the compose definition, let's declare this to Katenary with labels:
If you need the old netcat-based method (requires port), add the `katenary.v3/depends-on: legacy` label to the dependent service:
```yaml
version: "3"
@@ -179,14 +176,15 @@ services:
image: php:8-apache
depends_on:
- database
labels:
katenary.v3/depends-on: legacy
database:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: foobar
labels:
katenary.v3/ports: |-
- 3306
ports:
- 3306:3306
```
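For reference, the legacy label shown above makes Katenary emit a netcat-based `initContainer`. A sketch of what gets generated (the name and exact formatting are illustrative, derived from the generator's `until nc -z` loop):

```yaml
# Illustrative only - rendered by Katenary, not written by hand.
initContainers:
  - name: wait-for-database
    image: busybox:latest
    command:
      - /bin/sh
      - -c
      - |
        until nc -z database 3306; do
          sleep 1;
        done
```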
### Declare ingresses

View File

@@ -49,6 +49,7 @@ fi
# Where to download the binary
TAG=$(curl -sLf https://repo.katenary.io/api/v1/repos/katenary/katenary/releases/latest 2>/dev/null | grep -Po '"tag_name":\s*"[^"]*"' | cut -d ":" -f2 | tr -d '"')
TAG=${TAG#releases/}
# use the right names for the OS and architecture
if [ $ARCH = "x86_64" ]; then
@@ -57,6 +58,7 @@ fi
BIN_URL="https://repo.katenary.io/api/packages/Katenary/generic/katenary/$TAG/katenary-$OS-$ARCH"
echo
echo "Downloading $BIN_URL"
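The `TAG=${TAG#releases/}` line added in the hunk above relies on POSIX shortest-prefix removal. A quick self-contained illustration (the tag value is hypothetical):

```shell
# POSIX ${var#pattern} strips the shortest prefix matching the pattern.
TAG="releases/v3.9.9"   # hypothetical tag_name returned by the releases API
TAG=${TAG#releases/}
echo "$TAG"             # v3.9.9
```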

View File

@@ -2,7 +2,6 @@ package generator
import (
"fmt"
"log"
"maps"
"os"
"path/filepath"
@@ -331,12 +330,12 @@ func (chart *HelmChart) setSharedConf(service types.ServiceConfig, deployments m
}
fromservices, err := labelstructs.EnvFromFrom(service.Labels[labels.LabelEnvFrom])
if err != nil {
log.Fatal("error unmarshaling env-from label:", err)
logger.Fatal("error unmarshaling env-from label:", err)
}
// find the configmap in the chart templates
for _, fromservice := range fromservices {
if _, ok := chart.Templates[fromservice+".configmap.yaml"]; !ok {
log.Printf("configmap %s not found in chart templates", fromservice)
logger.Warnf("configmap %s not found in chart templates", fromservice)
continue
}
// find the corresponding target deployment
@@ -356,7 +355,7 @@ func (chart *HelmChart) setEnvironmentValuesFrom(service types.ServiceConfig, de
}
mapping, err := labelstructs.GetValueFrom(service.Labels[labels.LabelValuesFrom])
if err != nil {
log.Fatal("error unmarshaling values-from label:", err)
logger.Fatal("error unmarshaling values-from label:", err)
}
findDeployment := func(name string) *Deployment {
@@ -375,11 +374,11 @@ func (chart *HelmChart) setEnvironmentValuesFrom(service types.ServiceConfig, de
dep := findDeployment(depName[0])
target := findDeployment(service.Name)
if dep == nil || target == nil {
log.Fatalf("deployment %s or %s not found", depName[0], service.Name)
logger.Fatalf("deployment %s or %s not found", depName[0], service.Name)
}
container, index := utils.GetContainerByName(target.service.ContainerName, target.Spec.Template.Spec.Containers)
if container == nil {
log.Fatalf("Container %s not found", target.GetName())
logger.Fatalf("Container %s not found", target.GetName())
}
reourceName := fmt.Sprintf(`{{ include "%s.fullname" . }}-%s`, chart.Name, depName[0])
// add environment with from

View File

@@ -2,7 +2,6 @@ package generator
import (
"fmt"
"log"
"os"
"path/filepath"
"regexp"
@@ -69,7 +68,7 @@ func NewConfigMap(service types.ServiceConfig, appName string, forFile bool) *Co
// get the secrets from the labels
secrets, err := labelstructs.SecretsFrom(service.Labels[labels.LabelSecrets])
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
// drop the secrets from the environment
for _, secret := range secrets {
@@ -95,7 +94,7 @@ func NewConfigMap(service types.ServiceConfig, appName string, forFile bool) *Co
if l, ok := service.Labels[labels.LabelMapEnv]; ok {
envmap, err := labelstructs.MapEnvFrom(l)
if err != nil {
log.Fatal("Error parsing map-env", err)
logger.Fatal("Error parsing map-env", err)
}
for key, value := range envmap {
cm.AddData(key, strings.ReplaceAll(value, "__APP__", appName))
@@ -145,7 +144,7 @@ func NewConfigMapFromDirectory(service types.ServiceConfig, appName, path string
path = filepath.Join(service.WorkingDir, path)
path = filepath.Clean(path)
if err := cm.AppendDir(path); err != nil {
log.Fatal("Error adding files to configmap:", err)
logger.Fatal("Error adding files to configmap:", err)
}
return cm
}

View File

@@ -4,7 +4,6 @@ import (
"bytes"
"errors"
"fmt"
"log"
"os"
"os/exec"
"path/filepath"
@@ -110,8 +109,19 @@ func Convert(config ConvertOptions, dockerComposeFile ...string) error {
// the current working directory is the directory
currentDir, _ := os.Getwd()
// Filter to only existing files before chdir
var existingFiles []string
for _, f := range dockerComposeFile {
if _, err := os.Stat(f); err == nil {
existingFiles = append(existingFiles, f)
}
}
if len(existingFiles) == 0 && len(dockerComposeFile) > 0 {
return fmt.Errorf("no compose file found: %v", dockerComposeFile)
}
// go to the root of the project
if err := os.Chdir(filepath.Dir(dockerComposeFile[0])); err != nil {
if err := os.Chdir(filepath.Dir(existingFiles[0])); err != nil {
logger.Failure(err.Error())
return err
}
@@ -123,12 +133,12 @@ func Convert(config ConvertOptions, dockerComposeFile ...string) error {
}()
// remove the directory part of the docker-compose files
for i, f := range dockerComposeFile {
dockerComposeFile[i] = filepath.Base(f)
for i, f := range existingFiles {
existingFiles[i] = filepath.Base(f)
}
// parse the compose files
project, err := parser.Parse(config.Profiles, config.EnvFiles, dockerComposeFile...)
project, err := parser.Parse(config.Profiles, config.EnvFiles, existingFiles...)
if err != nil {
logger.Failure("Cannot parse compose files", err.Error())
return err
@@ -596,7 +606,7 @@ func callHelmUpdate(config ConvertOptions) {
func removeNewlinesInsideBrackets(values []byte) []byte {
re, err := regexp.Compile(`(?s)\{\{(.*?)\}\}`)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
return re.ReplaceAllFunc(values, func(b []byte) []byte {
// get the first match
@@ -635,7 +645,7 @@ func writeContent(path string, content []byte) {
defer f.Close()
defer func() {
if _, err := f.Write(content); err != nil {
log.Fatal(err)
logger.Fatal(err)
}
}()
}

View File

@@ -1,11 +1,11 @@
package generator
import (
"log"
"strings"
"katenary.io/internal/generator/labels"
"katenary.io/internal/generator/labels/labelstructs"
"katenary.io/internal/logger"
"katenary.io/internal/utils"
"github.com/compose-spec/compose-go/v2/types"
@@ -33,7 +33,7 @@ func NewCronJob(service types.ServiceConfig, chart *HelmChart, appName string) (
}
mapping, err := labelstructs.CronJobFrom(labels)
if err != nil {
log.Fatalf("Error parsing cronjob labels: %s", err)
logger.Fatalf("Error parsing cronjob labels: %s", err)
return nil, nil
}

View File

@@ -2,7 +2,6 @@ package generator
import (
"fmt"
"log"
"os"
"path/filepath"
"regexp"
@@ -34,15 +33,16 @@ type ConfigMapMount struct {
// Deployment is a kubernetes Deployment.
type Deployment struct {
*appsv1.Deployment `yaml:",inline"`
chart *HelmChart `yaml:"-"`
configMaps map[string]*ConfigMapMount `yaml:"-"`
volumeMap map[string]string `yaml:"-"` // keep map of fixed named to original volume name
service *types.ServiceConfig `yaml:"-"`
defaultTag string `yaml:"-"`
isMainApp bool `yaml:"-"`
exchangesVolumes map[string]*labelstructs.ExchangeVolume `yaml:"-"`
boundEnvVar []string `yaml:"-"` // environement to remove
*appsv1.Deployment `yaml:",inline"`
chart *HelmChart `yaml:"-"`
configMaps map[string]*ConfigMapMount `yaml:"-"`
volumeMap map[string]string `yaml:"-"` // keep map of fixed names to original volume name
service *types.ServiceConfig `yaml:"-"`
defaultTag string `yaml:"-"`
isMainApp bool `yaml:"-"`
exchangesVolumes map[string]*labelstructs.ExchangeVolume `yaml:"-"`
boundEnvVar []string `yaml:"-"` // environment variables to remove
needsServiceAccount bool `yaml:"-"`
}
// NewDeployment creates a new Deployment from a compose service. The appName is the name of the application taken from the project name.
@@ -166,7 +166,7 @@ func (d *Deployment) AddHealthCheck(service types.ServiceConfig, container *core
if v, ok := service.Labels[labels.LabelHealthCheck]; ok {
probes, err := labelstructs.ProbeFrom(v)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
container.LivenessProbe = probes.LivenessProbe
container.ReadinessProbe = probes.ReadinessProbe
@@ -201,7 +201,7 @@ func (d *Deployment) AddVolumes(service types.ServiceConfig, appName string) {
if v, ok := service.Labels[labels.LabelConfigMapFiles]; ok {
binds, err := labelstructs.ConfigMapFileFrom(v)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
for _, bind := range binds {
tobind[bind] = true
@@ -263,19 +263,31 @@ func (d *Deployment) BindFrom(service types.ServiceConfig, binded *Deployment) {
// DependsOn adds an initContainer to the deployment that will wait for the service to be up.
func (d *Deployment) DependsOn(to *Deployment, servicename string) error {
// Add an initContainer that waits for the dependent service to be ready.
// By default it queries the Kubernetes API; the "legacy" label falls back to netcat on each declared port.
logger.Info("Adding dependency from ", d.service.Name, " to ", to.service.Name)
useLegacy := false
if label, ok := d.service.Labels[labels.LabelDependsOn]; ok {
useLegacy = strings.ToLower(label) == "legacy"
}
if useLegacy {
return d.dependsOnLegacy(to, servicename)
}
d.needsServiceAccount = true
return d.dependsOnK8sAPI(to)
}
func (d *Deployment) dependsOnLegacy(to *Deployment, servicename string) error {
for _, container := range to.Spec.Template.Spec.Containers {
commands := []string{}
if len(container.Ports) == 0 {
logger.Warn("No ports found for service ",
logger.Fatal("No ports found for service ",
servicename,
". You should declare a port in the service or use "+
labels.LabelPorts+
" label.",
)
os.Exit(1)
}
for _, port := range container.Ports {
command := fmt.Sprintf("until nc -z %s %d; do\n sleep 1;\ndone", to.Name, port.ContainerPort)
@@ -293,6 +305,39 @@ func (d *Deployment) DependsOn(to *Deployment, servicename string) error {
return nil
}
func (d *Deployment) dependsOnK8sAPI(to *Deployment) error {
script := `NAMESPACE=${NAMESPACE:-default}
SERVICE=%s
KUBERNETES_SERVICE_HOST=${KUBERNETES_SERVICE_HOST:-kubernetes.default.svc}
KUBERNETES_SERVICE_PORT=${KUBERNETES_SERVICE_PORT:-443}
until wget -q -O- --header="Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
--cacert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
"https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE}" \
| grep -q '"ready":.*true'; do
sleep 2
done`
command := []string{"/bin/sh", "-c", fmt.Sprintf(script, to.Name)}
d.Spec.Template.Spec.InitContainers = append(d.Spec.Template.Spec.InitContainers, corev1.Container{
Name: "wait-for-" + to.service.Name,
Image: "busybox:latest",
Command: command,
Env: []corev1.EnvVar{
{
Name: "NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
},
})
return nil
}
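The init script added above polls the Endpoints object and greps the raw JSON for a ready flag. A standalone sketch of that matching step (the payload below is hypothetical and abbreviated, not a verbatim API response):

```shell
# Hypothetical payload; real Endpoints responses differ in shape.
cat > /tmp/endpoints.json <<'EOF'
{"kind": "Endpoints", "subsets": [{"addresses": [{"ip": "10.1.2.3"}], "ready": true}]}
EOF

# Same grep pattern the generated initContainer uses:
if grep -q '"ready":.*true' /tmp/endpoints.json; then
  echo "ready"
else
  echo "waiting"
fi
```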
// Filename returns the filename of the deployment.
func (d *Deployment) Filename() string {
return d.service.Name + ".deployment.yaml"
@@ -311,7 +356,7 @@ func (d *Deployment) SetEnvFrom(service types.ServiceConfig, appName string, sam
defer func() {
c, index := d.BindMapFilesToContainer(service, secrets, appName)
if c == nil || index == -1 {
log.Println("Container not found for service ", service.Name)
logger.Warn("Container not found for service ", service.Name)
return
}
d.Spec.Template.Spec.Containers[index] = *c
@@ -320,7 +365,7 @@ func (d *Deployment) SetEnvFrom(service types.ServiceConfig, appName string, sam
// secrets from label
labelSecrets, err := labelstructs.SecretsFrom(service.Labels[labels.LabelSecrets])
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
// values from label
@@ -335,7 +380,7 @@ func (d *Deployment) SetEnvFrom(service types.ServiceConfig, appName string, sam
_, ok := service.Environment[secret]
if !ok {
drop = append(drop, secret)
logger.Warn("Secret " + secret + " not found in service " + service.Name + " - skpped")
logger.Warn("Secret " + secret + " not found in service " + service.Name + " - skipped")
continue
}
secrets = append(secrets, secret)
@@ -352,7 +397,7 @@ func (d *Deployment) SetEnvFrom(service types.ServiceConfig, appName string, sam
val, ok := service.Environment[value]
if !ok {
drop = append(drop, value)
logger.Warn("Environment variable " + value + " not found in service " + service.Name + " - skpped")
logger.Warn("Environment variable " + value + " not found in service " + service.Name + " - skipped")
continue
}
if d.chart.Values[service.Name].(*Value).Environment == nil {
@@ -384,8 +429,8 @@ func (d *Deployment) BindMapFilesToContainer(service types.ServiceConfig, secret
if envSize > 0 {
if service.Name == "db" {
log.Println("Service ", service.Name, " has environment variables")
log.Println(service.Environment)
logger.Info("Service ", service.Name, " has environment variables")
logger.Info(service.Environment)
}
fromSources = append(fromSources, corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
@@ -568,7 +613,7 @@ func (d *Deployment) Yaml() ([]byte, error) {
}
// manage serviceAccount, add condition to use the serviceAccount from values.yaml
if strings.Contains(line, "serviceAccountName:") {
if strings.Contains(line, "serviceAccountName:") && !d.needsServiceAccount {
spaces = strings.Repeat(" ", utils.CountStartingSpaces(line))
pre := spaces + `{{- if ne .Values.` + serviceName + `.serviceAccount "" }}`
post := spaces + "{{- end }}"
@@ -604,6 +649,13 @@ func (d *Deployment) Yaml() ([]byte, error) {
return []byte(strings.Join(content, "\n")), nil
}
func (d *Deployment) SetServiceAccountName() {
if d.needsServiceAccount {
d.Spec.Template.Spec.ServiceAccountName = utils.TplName(d.service.Name, d.chart.Name, "dependency")
}
}
func (d *Deployment) appendDirectoryToConfigMap(service types.ServiceConfig, appName string, volume types.ServiceVolumeConfig) {
pathnme := utils.PathToName(volume.Source)
if _, ok := d.configMaps[pathnme]; !ok {
@@ -615,7 +667,7 @@ func (d *Deployment) appendDirectoryToConfigMap(service types.ServiceConfig, app
// TODO: make it recursive to add all files in the directory and subdirectories
_, err := os.ReadDir(volume.Source)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
cm := NewConfigMapFromDirectory(service, appName, volume.Source)
d.configMaps[pathnme] = &ConfigMapMount{
@@ -660,7 +712,7 @@ func (d *Deployment) appendFileToConfigMap(service types.ServiceConfig, appName
}
if err := cm.AppendFile(volume.Source); err != nil {
log.Fatal("Error adding file to configmap:", err)
logger.Fatal("Error adding file to configmap:", err)
}
}
@@ -721,7 +773,7 @@ func (d *Deployment) bindVolumes(volume types.ServiceVolumeConfig, tobind map[st
// Add volume to container
stat, err := os.Stat(volume.Source)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
if stat.IsDir() {

View File

@@ -3,6 +3,7 @@ package generator
import (
"fmt"
"os"
"slices"
"strings"
"testing"
@@ -11,6 +12,7 @@ import (
yaml3 "gopkg.in/yaml.v3"
v1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
"sigs.k8s.io/yaml"
)
@@ -142,6 +144,86 @@ services:
if len(dt.Spec.Template.Spec.InitContainers) != 1 {
t.Errorf("Expected 1 init container, got %d", len(dt.Spec.Template.Spec.InitContainers))
}
initContainer := dt.Spec.Template.Spec.InitContainers[0]
if !strings.Contains(initContainer.Image, "busybox") {
t.Errorf("Expected busybox image, got %s", initContainer.Image)
}
fullCommand := strings.Join(initContainer.Command, " ")
if !strings.Contains(fullCommand, "wget") {
t.Errorf("Expected wget command (K8s API method), got %s", fullCommand)
}
if !strings.Contains(fullCommand, "/api/v1/namespaces/") {
t.Errorf("Expected Kubernetes API call to /api/v1/namespaces/, got %s", fullCommand)
}
if !strings.Contains(fullCommand, "/endpoints/") {
t.Errorf("Expected Kubernetes API call to /endpoints/, got %s", fullCommand)
}
if len(initContainer.Env) == 0 {
t.Errorf("Expected environment variables to be set for namespace")
}
hasNamespace := false
for _, env := range initContainer.Env {
if env.Name == "NAMESPACE" && env.ValueFrom != nil && env.ValueFrom.FieldRef != nil {
if env.ValueFrom.FieldRef.FieldPath == "metadata.namespace" {
hasNamespace = true
break
}
}
}
if !hasNamespace {
t.Errorf("Expected NAMESPACE env var with metadata.namespace fieldRef")
}
}
func TestDependsOnLegacy(t *testing.T) {
composeFile := `
services:
web:
image: nginx:1.29
ports:
- 80:80
depends_on:
- database
labels:
katenary.v3/depends-on: legacy
database:
image: mariadb:10.5
ports:
- 3306:3306
`
tmpDir := setup(composeFile)
defer teardown(tmpDir)
currentDir, _ := os.Getwd()
os.Chdir(tmpDir)
defer os.Chdir(currentDir)
output := internalCompileTest(t, "-s", webTemplateOutput)
dt := v1.Deployment{}
if err := yaml.Unmarshal([]byte(output), &dt); err != nil {
t.Errorf(unmarshalError, err)
}
if len(dt.Spec.Template.Spec.InitContainers) != 1 {
t.Errorf("Expected 1 init container, got %d", len(dt.Spec.Template.Spec.InitContainers))
}
initContainer := dt.Spec.Template.Spec.InitContainers[0]
if !strings.Contains(initContainer.Image, "busybox") {
t.Errorf("Expected busybox image, got %s", initContainer.Image)
}
fullCommand := strings.Join(initContainer.Command, " ")
if !strings.Contains(fullCommand, "nc") {
t.Errorf("Expected nc (netcat) command for legacy method, got %s", fullCommand)
}
}
func TestHelmDependencies(t *testing.T) {
@@ -563,3 +645,192 @@ services:
t.Errorf("Expected command to be 'bar baz', got %s", strings.Join(command, " "))
}
}
func TestRestrictedRBACGeneration(t *testing.T) {
composeFile := `
services:
web:
image: nginx:1.29
ports:
- 80:80
depends_on:
- database
database:
image: mariadb:10.5
ports:
- 3306:3306
`
tmpDir := setup(composeFile)
defer teardown(tmpDir)
currentDir, _ := os.Getwd()
os.Chdir(tmpDir)
defer os.Chdir(currentDir)
rbacOutput := internalCompileTest(t, "-s", "templates/web/depends-on.rbac.yaml")
docs := strings.Split(rbacOutput, "---\n")
// Filter out empty documents and strip helm template comments
var filteredDocs []string
for _, doc := range docs {
if strings.TrimSpace(doc) != "" {
// Remove '# Source:' comment lines that helm template adds
lines := strings.Split(doc, "\n")
var contentLines []string
for _, line := range lines {
if !strings.HasPrefix(strings.TrimSpace(line), "# Source:") {
contentLines = append(contentLines, line)
}
}
filteredDocs = append(filteredDocs, strings.Join(contentLines, "\n"))
}
}
if len(filteredDocs) != 3 {
t.Fatalf("Expected 3 YAML documents in RBAC file, got %d (filtered from %d)", len(filteredDocs), len(docs))
}
var sa corev1.ServiceAccount
if err := yaml.Unmarshal([]byte(strings.TrimSpace(filteredDocs[0])), &sa); err != nil {
t.Errorf("Failed to unmarshal ServiceAccount: %v", err)
}
if sa.Kind != "ServiceAccount" {
t.Errorf("Expected Kind=ServiceAccount, got %s", sa.Kind)
}
if !strings.Contains(sa.Name, "web") {
t.Errorf("Expected ServiceAccount name to contain 'web', got %s", sa.Name)
}
var role rbacv1.Role
if err := yaml.Unmarshal([]byte(strings.TrimSpace(filteredDocs[1])), &role); err != nil {
t.Errorf("Failed to unmarshal Role: %v", err)
}
if role.Kind != "Role" {
t.Errorf("Expected Kind=Role, got %s", role.Kind)
}
if len(role.Rules) != 1 {
t.Errorf("Expected 1 rule in Role, got %d", len(role.Rules))
}
rule := role.Rules[0]
if !contains(rule.APIGroups, "") {
t.Error("Expected APIGroup to include core API ('')")
}
if !contains(rule.Resources, "endpoints") {
t.Errorf("Expected Resource to include 'endpoints', got %v", rule.Resources)
}
for _, res := range rule.Resources {
if res == "*" {
t.Error("Role should not have wildcard (*) resource permissions")
}
}
for _, verb := range rule.Verbs {
if verb == "*" {
t.Error("Role should not have wildcard (*) verb permissions")
}
}
var rb rbacv1.RoleBinding
if err := yaml.Unmarshal([]byte(strings.TrimSpace(filteredDocs[2])), &rb); err != nil {
t.Errorf("Failed to unmarshal RoleBinding: %v", err)
}
if rb.Kind != "RoleBinding" {
t.Errorf("Expected Kind=RoleBinding, got %s", rb.Kind)
}
if len(rb.Subjects) != 1 {
t.Errorf("Expected 1 subject in RoleBinding, got %d", len(rb.Subjects))
}
if rb.Subjects[0].Kind != "ServiceAccount" {
t.Errorf("Expected Subject Kind=ServiceAccount, got %s", rb.Subjects[0].Kind)
}
// Helm template renders the name, so check if it contains "web"
if !strings.Contains(rb.RoleRef.Name, "web") {
t.Errorf("Expected RoleRef Name to contain 'web', got %s", rb.RoleRef.Name)
}
if rb.RoleRef.Kind != "Role" {
t.Errorf("Expected RoleRef Kind=Role, got %s", rb.RoleRef.Kind)
}
}
func TestDeploymentReferencesServiceAccount(t *testing.T) {
composeFile := `
services:
web:
image: nginx:1.29
ports:
- 80:80
depends_on:
- database
database:
image: mariadb:10.5
ports:
- 3306:3306
`
tmpDir := setup(composeFile)
defer teardown(tmpDir)
currentDir, _ := os.Getwd()
os.Chdir(tmpDir)
defer os.Chdir(currentDir)
output := internalCompileTest(t, "-s", "templates/web/deployment.yaml")
var dt v1.Deployment
if err := yaml.Unmarshal([]byte(output), &dt); err != nil {
t.Errorf("Failed to unmarshal Deployment: %v", err)
}
serviceAccountName := dt.Spec.Template.Spec.ServiceAccountName
if !strings.Contains(serviceAccountName, "web") {
t.Errorf("Expected ServiceAccountName to contain 'web', got %s", serviceAccountName)
}
if len(dt.Spec.Template.Spec.InitContainers) == 0 {
t.Fatal("Expected at least one init container for depends_on")
}
initContainer := dt.Spec.Template.Spec.InitContainers[0]
if initContainer.Name != "wait-for-database" {
t.Errorf("Expected init container name 'wait-for-database', got %s", initContainer.Name)
}
fullCommand := strings.Join(initContainer.Command, " ")
if !strings.Contains(fullCommand, "wget") {
t.Error("Expected init container to use wget for K8s API calls")
}
if !strings.Contains(fullCommand, "/api/v1/namespaces/") {
t.Error("Expected init container to call /api/v1/namespaces/ endpoint")
}
if !strings.Contains(fullCommand, "/endpoints/") {
t.Error("Expected init container to access /endpoints/ resource")
}
hasNamespace := false
for _, env := range initContainer.Env {
if env.Name == "NAMESPACE" && env.ValueFrom != nil && env.ValueFrom.FieldRef != nil {
if env.ValueFrom.FieldRef.FieldPath == "metadata.namespace" {
hasNamespace = true
break
}
}
}
if !hasNamespace {
t.Error("Expected NAMESPACE env var with metadata.namespace fieldRef")
}
_, err := os.Stat("./chart/templates/web/depends-on.rbac.yaml")
if os.IsNotExist(err) {
t.Error("RBAC file depends-on.rbac.yaml should exist for service using depends_on with K8s API")
} else if err != nil {
t.Errorf("Unexpected error checking RBAC file: %v", err)
}
}
func contains(slice []string, item string) bool {
return slices.Contains(slice, item)
}

View File

@@ -4,12 +4,12 @@ import (
"bytes"
_ "embed"
"fmt"
"log"
"sort"
"strings"
"text/template"
"gopkg.in/yaml.v3"
"katenary.io/internal/logger"
)
//go:embed readme.tpl
@@ -50,7 +50,7 @@ func ReadMeFile(charname, description string, values map[string]any) string {
vv := map[string]any{}
out, _ := yaml.Marshal(values)
if err := yaml.Unmarshal(out, &vv); err != nil {
log.Printf("Error parsing values: %s", err)
logger.Warnf("Error parsing values: %s", err)
}
result := make(map[string]string)

View File

@@ -3,7 +3,6 @@ package generator
import (
"bytes"
"fmt"
"log"
"regexp"
"strings"
@@ -23,7 +22,7 @@ import (
// The Generate function will create the HelmChart object this way:
//
// - Detect the service port name or leave the port number if not found.
// - Create a deployment for each service that are not ingnore.
// - Create a deployment for each service that is not ignored.
// - Create a service and ingresses for each service that has ports and/or declared ingresses.
// - Create a PVC or Configmap volumes for each volume.
// - Create init containers for each service which has dependencies to other services.
@@ -135,6 +134,12 @@ func Generate(project *types.Project) (*HelmChart, error) {
}
}
}
// set ServiceAccountName for deployments that need it
for _, d := range deployments {
d.SetServiceAccountName()
}
for _, name := range drops {
delete(deployments, name)
}
@@ -143,9 +148,14 @@ func Generate(project *types.Project) (*HelmChart, error) {
chart.setEnvironmentValuesFrom(s, deployments)
}
// generate RBAC resources for services that need K8s API access (non-legacy depends_on)
if err := chart.generateRBAC(deployments); err != nil {
logger.Fatalf("error generating RBAC: %s", err)
}
// generate configmaps with environment variables
if err := chart.generateConfigMapsAndSecrets(project); err != nil {
log.Fatalf("error generating configmaps and secrets: %s", err)
logger.Fatalf("error generating configmaps and secrets: %s", err)
}
// if the env-from label is set, we need to add the env vars from the configmap
@@ -280,7 +290,7 @@ func addStaticVolumes(deployments map[string]*Deployment, service types.ServiceC
var d *Deployment
var ok bool
if d, ok = deployments[service.Name]; !ok {
log.Printf("service %s not found in deployments", service.Name)
logger.Warnf("service %s not found in deployments", service.Name)
return
}
@@ -292,7 +302,7 @@ func addStaticVolumes(deployments map[string]*Deployment, service types.ServiceC
var y []byte
var err error
if y, err = config.configMap.Yaml(); err != nil {
log.Fatal(err)
logger.Fatal(err)
}
// add the configmap to the chart
@@ -434,13 +444,65 @@ func samePodVolume(service types.ServiceConfig, v types.ServiceVolumeConfig, dep
// check if it has the same volume
for _, tv := range target.Spec.Template.Spec.Volumes {
if tv.Name == v.Source {
log.Printf("found same pod volume %s in deployment %s and %s", tv.Name, service.Name, targetDeployment)
logger.Warnf("found same pod volume %s in deployment %s and %s", tv.Name, service.Name, targetDeployment)
return true
}
}
return false
}
// generateRBAC creates RBAC resources (ServiceAccount, Role, RoleBinding) for services that need K8s API access.
// A service needs RBAC if it has non-legacy depends_on relationships.
func (chart *HelmChart) generateRBAC(deployments map[string]*Deployment) error {
serviceMap := make(map[string]bool)
for _, d := range deployments {
if !d.needsServiceAccount {
continue
}
sa := NewServiceAccount(*d.service, chart.Name)
role := NewRestrictedRole(*d.service, chart.Name)
rb := NewRestrictedRoleBinding(*d.service, chart.Name)
var buf bytes.Buffer
saYaml, err := yaml.Marshal(sa.ServiceAccount)
if err != nil {
return fmt.Errorf("error marshaling ServiceAccount for %s: %w", d.service.Name, err)
}
buf.Write(saYaml)
buf.WriteString("---\n")
roleYaml, err := yaml.Marshal(role.Role)
if err != nil {
return fmt.Errorf("error marshaling Role for %s: %w", d.service.Name, err)
}
buf.Write(roleYaml)
buf.WriteString("---\n")
rbYaml, err := yaml.Marshal(rb.RoleBinding)
if err != nil {
return fmt.Errorf("error marshaling RoleBinding for %s: %w", d.service.Name, err)
}
buf.Write(rbYaml)
filename := d.service.Name + "/depends-on.rbac.yaml"
chart.Templates[filename] = &ChartTemplate{
Content: buf.Bytes(),
Servicename: d.service.Name,
}
serviceMap[d.service.Name] = true
}
for svcName := range serviceMap {
logger.Log(logger.IconPackage, "Creating RBAC", svcName)
}
return nil
}
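As a rough illustration, the `depends-on.rbac.yaml` template assembled above would render to three documents along these lines (all names are hypothetical; actual names come from the Helm templates, and the exact verbs are an assumption beyond "no wildcards"):

```yaml
# Illustrative sketch of the rendered RBAC file - not verbatim output.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-web-dependency
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-web-dependency
rules:
  - apiGroups: [""]          # core API group only
    resources: ["endpoints"] # no wildcard resources
    verbs: ["get"]           # assumption: read-only, no wildcard verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-web-dependency
subjects:
  - kind: ServiceAccount
    name: myapp-web-dependency
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myapp-web-dependency
```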
func fixContainerNames(project *types.Project) {
// fix container names to be unique
for i, service := range project.Services {

View File

@@ -1,11 +1,11 @@
package generator
import (
"log"
"strings"
"katenary.io/internal/generator/labels"
"katenary.io/internal/generator/labels/labelstructs"
"katenary.io/internal/logger"
"katenary.io/internal/utils"
"github.com/compose-spec/compose-go/v2/types"
@@ -36,7 +36,7 @@ func NewIngress(service types.ServiceConfig, Chart *HelmChart) *Ingress {
mapping, err := labelstructs.IngressFrom(label)
if err != nil {
log.Fatalf("Failed to parse ingress label: %s\n", err)
logger.Fatalf("Failed to parse ingress label: %s\n", err)
}
if mapping.Hostname == "" {
mapping.Hostname = service.Name + ".tld"


@@ -3,7 +3,6 @@ package katenaryfile
import (
"bytes"
"encoding/json"
"log"
"os"
"reflect"
"strings"
@@ -67,7 +66,7 @@ func OverrideWithConfig(project *types.Project) {
return
}
if err := yaml.NewDecoder(fp).Decode(&services); err != nil {
log.Fatal(err)
logger.Fatal(err)
return
}
for _, p := range project.Services {
@@ -79,7 +78,7 @@ func OverrideWithConfig(project *types.Project) {
}
err := getLabelContent(o, &s, labelName)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
project.Services[name] = s
}
@@ -113,7 +112,7 @@ func getLabelContent(o any, service *types.ServiceConfig, labelName string) erro
c, err := yaml.Marshal(o)
if err != nil {
log.Println(err)
logger.Failure(err.Error())
return err
}
val := strings.TrimSpace(string(c))
@@ -121,7 +120,7 @@ func getLabelContent(o any, service *types.ServiceConfig, labelName string) erro
// special case, values must be set from some defaults
ing, err := labelstructs.IngressFrom(val)
if err != nil {
log.Fatal(err)
logger.Fatal(err)
return err
}
c, err := yaml.Marshal(ing)


@@ -4,13 +4,13 @@ import (
"bytes"
_ "embed"
"fmt"
"log"
"regexp"
"sort"
"strings"
"text/tabwriter"
"text/template"
"katenary.io/internal/logger"
"katenary.io/internal/utils"
"sigs.k8s.io/yaml"
@@ -36,6 +36,7 @@ const (
LabelEnvFrom Label = KatenaryLabelPrefix + "/env-from"
LabelExchangeVolume Label = KatenaryLabelPrefix + "/exchange-volumes"
LabelValuesFrom Label = KatenaryLabelPrefix + "/values-from"
LabelDependsOn Label = KatenaryLabelPrefix + "/depends-on"
)
var (
@@ -134,7 +135,7 @@ func GetLabelHelpFor(labelname string, asMarkdown bool) string {
KatenaryPrefix: KatenaryLabelPrefix,
})
if err != nil {
log.Fatalf("Error executing template: %v", err)
logger.Fatalf("Error executing template: %v", err)
}
help.Long = buf.String()
buf.Reset()
@@ -145,7 +146,7 @@ func GetLabelHelpFor(labelname string, asMarkdown bool) string {
KatenaryPrefix: KatenaryLabelPrefix,
})
if err != nil {
log.Fatalf("Error executing template: %v", err)
logger.Fatalf("Error executing template: %v", err)
}
help.Example = buf.String()
buf.Reset()
@@ -160,7 +161,7 @@ func GetLabelHelpFor(labelname string, asMarkdown bool) string {
KatenaryPrefix: KatenaryLabelPrefix,
})
if err != nil {
log.Fatalf("Error executing template: %v", err)
logger.Fatalf("Error executing template: %v", err)
}
return buf.String()


@@ -355,4 +355,25 @@
DB_USER: database.MARIADB_USER
DB_PASSWORD: database.MARIADB_PASSWORD
"depends-on":
short: "Method to check if a service is ready (for depends_on)."
long: |-
When a service uses `depends_on`, Katenary creates an initContainer that
waits for each service listed in `depends_on` to be ready.
By default, Katenary uses the Kubernetes API to check whether the service
endpoint has ready addresses. This method does not require the dependency
to expose a port.
Set this label to `legacy` to use the old netcat method, which requires
the dependency to declare a port.
example: |-
web:
image: nginx
depends_on:
- database
labels:
# Use legacy netcat method (requires port)
{{ .KatenaryPrefix }}/depends-on: legacy
type: "string"
# vim: ft=gotmpl.yaml
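The default (non-legacy) check described above boils down to a predicate over the service's Endpoints object: wait until at least one subset has a ready address. This is a hedged illustration, not Katenary's actual implementation; the structs below are minimal stand-ins for `corev1.Endpoints`, which the real code fetches from the Kubernetes API using the RBAC permissions generated for the service.

```go
package main

import "fmt"

// Minimal stand-ins for the corev1.Endpoints shapes (illustration only).
type EndpointAddress struct{ IP string }

type EndpointSubset struct {
	Addresses         []EndpointAddress // addresses that passed readiness
	NotReadyAddresses []EndpointAddress // addresses still starting up
}

type Endpoints struct{ Subsets []EndpointSubset }

// hasReadyAddresses reports whether at least one subset exposes a ready
// address — the condition the initContainer would poll for.
func hasReadyAddresses(ep Endpoints) bool {
	for _, s := range ep.Subsets {
		if len(s.Addresses) > 0 {
			return true
		}
	}
	return false
}

func main() {
	pending := Endpoints{Subsets: []EndpointSubset{
		{NotReadyAddresses: []EndpointAddress{{IP: "10.0.0.5"}}},
	}}
	ready := Endpoints{Subsets: []EndpointSubset{
		{Addresses: []EndpointAddress{{IP: "10.0.0.5"}}},
	}}
	fmt.Println(hasReadyAddresses(pending), hasReadyAddresses(ready)) // → false true
}
```

Because the check reads endpoint state rather than probing a TCP port, it works for services that declare no ports at all, which is exactly what the legacy netcat method could not handle.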


@@ -2,10 +2,10 @@ package labelstructs
import (
"encoding/json"
"log"
"gopkg.in/yaml.v3"
corev1 "k8s.io/api/core/v1"
"katenary.io/internal/logger"
)
type HealthCheck struct {
@@ -24,13 +24,13 @@ func ProbeFrom(data string) (*HealthCheck, error) {
if livenessProbe, ok := tmp["livenessProbe"]; ok {
livenessProbeBytes, err := json.Marshal(livenessProbe)
if err != nil {
log.Printf("Error marshalling livenessProbe: %v", err)
logger.Warnf("Error marshalling livenessProbe: %v", err)
return nil, err
}
livenessProbe := &corev1.Probe{}
err = json.Unmarshal(livenessProbeBytes, livenessProbe)
if err != nil {
log.Printf("Error unmarshalling livenessProbe: %v", err)
logger.Warnf("Error unmarshalling livenessProbe: %v", err)
return nil, err
}
mapping.LivenessProbe = livenessProbe
@@ -39,13 +39,13 @@ func ProbeFrom(data string) (*HealthCheck, error) {
if readinessProbe, ok := tmp["readinessProbe"]; ok {
readinessProbeBytes, err := json.Marshal(readinessProbe)
if err != nil {
log.Printf("Error marshalling readinessProbe: %v", err)
logger.Warnf("Error marshalling readinessProbe: %v", err)
return nil, err
}
readinessProbe := &corev1.Probe{}
err = json.Unmarshal(readinessProbeBytes, readinessProbe)
if err != nil {
log.Printf("Error unmarshalling readinessProbe: %v", err)
logger.Warnf("Error unmarshalling readinessProbe: %v", err)
return nil, err
}
mapping.ReadinessProbe = readinessProbe


@@ -32,7 +32,7 @@ func NewRBAC(service types.ServiceConfig, appName string) *RBAC {
APIVersion: "rbac.authorization.k8s.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: utils.TplName(service.Name, appName),
Name: utils.TplName(service.Name, appName, "dependency"),
Labels: GetLabels(service.Name, appName),
Annotations: Annotations,
},
@@ -128,6 +128,79 @@ func (r *Role) Yaml() ([]byte, error) {
}
}
// NewServiceAccount creates a new ServiceAccount from a compose service.
func NewServiceAccount(service types.ServiceConfig, appName string) *ServiceAccount {
return &ServiceAccount{
ServiceAccount: &corev1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
Kind: "ServiceAccount",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: utils.TplName(service.Name, appName, "dependency"),
Labels: GetLabels(service.Name, appName),
Annotations: Annotations,
},
},
service: &service,
}
}
// NewRestrictedRole creates a Role with minimal permissions for init containers.
func NewRestrictedRole(service types.ServiceConfig, appName string) *Role {
return &Role{
Role: &rbacv1.Role{
TypeMeta: metav1.TypeMeta{
Kind: "Role",
APIVersion: "rbac.authorization.k8s.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: utils.TplName(service.Name, appName, "dependency"),
Labels: GetLabels(service.Name, appName),
Annotations: Annotations,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"endpoints"},
Verbs: []string{"get", "list", "watch"},
},
},
},
service: &service,
}
}
// NewRestrictedRoleBinding creates a RoleBinding that binds the restricted role to the ServiceAccount.
func NewRestrictedRoleBinding(service types.ServiceConfig, appName string) *RoleBinding {
return &RoleBinding{
RoleBinding: &rbacv1.RoleBinding{
TypeMeta: metav1.TypeMeta{
Kind: "RoleBinding",
APIVersion: "rbac.authorization.k8s.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: utils.TplName(service.Name, appName, "dependency"),
Labels: GetLabels(service.Name, appName),
Annotations: Annotations,
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: utils.TplName(service.Name, appName, "dependency"),
Namespace: "{{ .Release.Namespace }}",
},
},
RoleRef: rbacv1.RoleRef{
Kind: "Role",
Name: utils.TplName(service.Name, appName, "dependency"),
APIGroup: "rbac.authorization.k8s.io",
},
},
service: &service,
}
}
// ServiceAccount is a kubernetes ServiceAccount.
type ServiceAccount struct {
*corev1.ServiceAccount


@@ -1,11 +1,11 @@
package generator
import (
"log"
"os"
"os/exec"
"testing"
"katenary.io/internal/logger"
"katenary.io/internal/parser"
)
@@ -23,7 +23,7 @@ func setup(content string) string {
func teardown(tmpDir string) {
// remove the temporary directory
log.Println("Removing temporary directory: ", tmpDir)
logger.Info("Removing temporary directory: ", tmpDir)
if err := os.RemoveAll(tmpDir); err != nil {
panic(err)
}
@@ -59,7 +59,7 @@ func compileTest(t *testing.T, force bool, options ...string) string {
ChartVersion: chartVersion,
}
if err := Convert(convertOptions, "compose.yml"); err != nil {
log.Printf("Failed to convert: %s", err)
logger.Warnf("Failed to convert: %s", err)
return err.Error()
}


@@ -1,6 +1,11 @@
// Package logger provides simple logging functions with icons and colors.
package logger
import (
"fmt"
"os"
)
// Icon is a unicode icon
type Icon string
@@ -22,30 +27,91 @@ const (
const reset = "\033[0m"
// Print prints a message without icon.
func Print(msg ...any) {
fmt.Print(msg...)
}
// Printf prints a formatted message without icon.
func Printf(format string, msg ...any) {
fmt.Printf(format, msg...)
}
// Info prints an informational message.
func Info(msg ...any) {
message("", IconInfo, msg...)
}
// Infof prints a formatted informational message.
func Infof(format string, msg ...any) {
message("", IconInfo, fmt.Sprintf(format, msg...))
}
// Warn prints a warning message.
func Warn(msg ...any) {
orange := "\033[38;5;214m"
message(orange, IconWarning, msg...)
}
// Warnf prints a formatted warning message.
func Warnf(format string, msg ...any) {
orange := "\033[38;5;214m"
message(orange, IconWarning, fmt.Sprintf(format, msg...))
}
// Success prints a success message.
func Success(msg ...any) {
green := "\033[38;5;34m"
message(green, IconSuccess, msg...)
}
// Successf prints a formatted success message.
func Successf(format string, msg ...any) {
green := "\033[38;5;34m"
message(green, IconSuccess, fmt.Sprintf(format, msg...))
}
// Failure prints a failure message.
func Failure(msg ...any) {
red := "\033[38;5;196m"
message(red, IconFailure, msg...)
}
// Failuref prints a formatted failure message.
func Failuref(format string, msg ...any) {
red := "\033[38;5;196m"
message(red, IconFailure, fmt.Sprintf(format, msg...))
}
// Log prints a message with a custom icon.
func Log(icon Icon, msg ...any) {
message("", icon, msg...)
}
// Logf prints a formatted message with a custom icon.
func Logf(icon Icon, format string, msg ...any) {
message("", icon, fmt.Sprintf(format, msg...))
}
func fatal(red string, icon Icon, msg ...any) {
fmt.Print(icon, " ", red)
fmt.Print(msg...)
fmt.Println(reset)
os.Exit(1)
}
func fatalf(red string, icon Icon, format string, msg ...any) {
fatal(red, icon, fmt.Sprintf(format, msg...))
}
// Fatal prints a fatal error message and exits with code 1.
func Fatal(msg ...any) {
red := "\033[38;5;196m"
fatal(red, IconFailure, msg...)
}
// Fatalf prints a fatal error message with formatting and exits with code 1.
func Fatalf(format string, msg ...any) {
red := "\033[38;5;196m"
fatalf(red, IconFailure, format, msg...)
}


@@ -0,0 +1,79 @@
package logger
import (
"testing"
)
func TestIcons(t *testing.T) {
tests := []struct {
name string
got Icon
expected Icon
}{
{"IconSuccess", IconSuccess, "✅"},
{"IconFailure", IconFailure, "❌"},
{"IconWarning", IconWarning, "❕"},
{"IconNote", IconNote, "📝"},
{"IconWorld", IconWorld, "🌐"},
{"IconPlug", IconPlug, "🔌"},
{"IconPackage", IconPackage, "📦"},
{"IconCabinet", IconCabinet, "🗄️"},
{"IconInfo", IconInfo, "🔵"},
{"IconSecret", IconSecret, "🔒"},
{"IconConfig", IconConfig, "🔧"},
{"IconDependency", IconDependency, "🔗"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if tt.got != tt.expected {
t.Errorf("got %q, want %q", tt.got, tt.expected)
}
})
}
}
func TestInfo(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("Info panicked: %v", r)
}
}()
Info("test message")
}
func TestWarn(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("Warn panicked: %v", r)
}
}()
Warn("test warning")
}
func TestSuccess(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("Success panicked: %v", r)
}
}()
Success("test success")
}
func TestFailure(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("Failure panicked: %v", r)
}
}()
Failure("test failure")
}
func TestLog(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("Log panicked: %v", r)
}
}()
Log(IconInfo, "test log")
}


@@ -3,11 +3,11 @@ package parser
import (
"context"
"log"
"path/filepath"
"github.com/compose-spec/compose-go/v2/cli"
"github.com/compose-spec/compose-go/v2/types"
"katenary.io/internal/logger"
)
func init() {
@@ -37,20 +37,25 @@ func Parse(profiles []string, envFiles []string, dockerComposeFile ...string) (*
var err error
envFiles[i], err = filepath.Abs(envFiles[i])
if err != nil {
log.Fatal(err)
logger.Fatal(err)
}
}
options, err := cli.NewProjectOptions(nil,
opts := []cli.ProjectOptionsFn{
cli.WithProfiles(profiles),
cli.WithInterpolation(true),
cli.WithDefaultConfigPath,
cli.WithEnvFiles(envFiles...),
cli.WithOsEnv,
cli.WithDotEnv,
cli.WithNormalization(true),
cli.WithResolvedPaths(false),
)
}
if len(dockerComposeFile) == 0 {
opts = append(opts, cli.WithDefaultConfigPath)
}
options, err := cli.NewProjectOptions(dockerComposeFile, opts...)
if err != nil {
return nil, err
}


@@ -1,10 +1,11 @@
package parser
import (
"log"
"os"
"path/filepath"
"testing"
"katenary.io/internal/logger"
)
const composeFile = `
@@ -27,7 +28,7 @@ func setupTest() (string, error) {
func tearDownTest(tmpDir string) {
if tmpDir != "" {
if err := os.RemoveAll(tmpDir); err != nil {
log.Fatalf("Failed to remove temporary directory %s: %s", tmpDir, err.Error())
logger.Fatalf("Failed to remove temporary directory %s: %s", tmpDir, err.Error())
}
}
}


@@ -3,7 +3,6 @@ package utils
import (
"bytes"
"fmt"
"log"
"path/filepath"
"strings"
@@ -133,8 +132,8 @@ func GetValuesFromLabel(service types.ServiceConfig, LabelValues string) map[str
labelContent := []any{}
err := yaml.Unmarshal([]byte(v), &labelContent)
if err != nil {
log.Printf("Error parsing label %s: %s", v, err)
log.Fatal(err)
logger.Warnf("Error parsing label %s: %s", v, err)
logger.Fatal(err)
}
for _, value := range labelContent {
@@ -150,7 +149,7 @@ func GetValuesFromLabel(service types.ServiceConfig, LabelValues string) map[str
descriptions[k.(string)] = &EnvConfig{Service: service, Description: v.(string)}
}
default:
log.Fatalf("Unknown type in label: %s %T", LabelValues, value)
logger.Fatalf("Unknown type in label: %s %T", LabelValues, value)
}
}
}
@@ -171,7 +170,7 @@ func Confirm(question string, icon ...logger.Icon) bool {
}
var response string
if _, err := fmt.Scanln(&response); err != nil {
log.Fatalf("Error parsing response: %s", err.Error())
logger.Fatalf("Error parsing response: %s", err.Error())
}
return strings.ToLower(response) == "y"
}