Convert Vault Public Key to JWK

The HashiCorp Vault Transit secrets engine API allows signing data and verifying signatures. It offers more features like encryption/decryption, hashing and random byte generation, but in this post we’re concerned only with the signing part of the API, and more specifically with converting the public keys returned by the Vault API to JWK format.

Example response when fetching a public key from Vault:

curl -X GET https://myvault.com/v1/transit/keys/keyname

{
    "data":
    {
        "keys":
        {
            "1":
            {
                "creation_time": "2022-06-13T12:21:27.257274629Z",
                "name": "P-256",
                "public_key": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAERTx/2cyYcGVSIRP/826S32BiZxSg\nnzyXgRYmKP8N2l26ec/MwCdsHIEyraX1ZYqwMUT4wO9fqFiGsRKyMBpPnQ==\n-----END PUBLIC KEY-----\n"
            }
        },
        "latest_version": 1,
        "type": "ecdsa-p256"
    }
}

Convert public key to JWK

Some key types, like Ed25519, are just raw bytes and are returned as such (base64 encoded), while other key types, like ECDSA and RSA, have internal structure and are returned as PEM-encoded keys.

Below is an example function that creates a jose.JSONWebKey from a Vault public key.

package example

import (
    "crypto/ed25519"
    "crypto/x509"
    "encoding/base64"
    "encoding/pem"
    "fmt"

    "github.com/square/go-jose/v3"
)

func jwkFromPEM(keyname string, keytype string, pemKey string) (*jose.JSONWebKey, error) {
    var key interface{}

    switch keytype {
    case "ed25519":
        // Vault returns Ed25519 public keys base64 encoded, not PEM encoded.
        raw, err := base64.StdEncoding.DecodeString(pemKey)
        if err != nil {
            return nil, err
        }
        key = ed25519.PublicKey(raw)
    case "ecdsa-p256", "ecdsa-p384", "ecdsa-p521", "rsa-2048", "rsa-4096":
        block, _ := pem.Decode([]byte(pemKey))
        if block == nil {
            return nil, fmt.Errorf("no public key found during PEM decode")
        }

        var err error
        key, err = x509.ParsePKIXPublicKey(block.Bytes)
        if err != nil {
            return nil, err
        }
    default:
        return nil, fmt.Errorf("unsupported key type: %s", keytype)
    }

    return &jose.JSONWebKey{
        KeyID: keyname,
        Key:   key,
    }, nil
}

The JSON encoding of a converted ECDSA P-256 public key looks like this:

{
    "kid": "keyname",
    "kty": "EC",
    "crv": "P-256",
    "x": "RTx_2cyYcGVSIRP_826S32BiZxSgnzyXgRYmKP8N2l0",
    "y": "unnPzMAnbByBMq2l9WWKsDFE-MDvX6hYhrESsjAaT50"
}

Gitlab Subgroups and Go Modules Names

I sometimes see people who host their Go modules (mostly libraries) in private GitLab subgroups name the modules with a .git suffix.

Let’s see an example. If we have a Go module project at the following imaginary location:

https://gitlab.com/myuser/foogroup/barlib

Now when running go get gitlab.com/myuser/foogroup/barlib from another Go project, even though the SSH keys are set correctly, we may get an error:

The project you were looking for could not be found or you don't have permissions to view it.

We start googling and reading Stack Overflow, and it seems that a possible solution is to rename the module to gitlab.com/myuser/foogroup/barlib.git … and the issue looks fixed.

rm go.mod go.sum
go mod init gitlab.com/myuser/foogroup/barlib.git
go mod tidy

cd /go/src/another_project
go get gitlab.com/myuser/foogroup/barlib.git

At this point people also start to doubt the Go tools (go mod and go get especially) and write this off as an annoying Go bug.

Both the reason for the issue and the solution are simple, and it’s not a bug.

The reason behind that permissions error

This happens because go get tries to discover the modules at a given path in order to find the requested Go module repository. Only after the repository is found do the tools run git clone or git checkout, and only then are the SSH keys used for authentication. The issue comes down to the fact that private GitLab subgroups cannot be listed/viewed without a GitLab access token.

The fix is Gitlab Access Token

Go to GitLab -> Preferences -> Access Tokens and create one. Then create (or edit) the .netrc file in your home folder (~/.netrc) with the following contents:

machine gitlab.com
    login my_gitlab_username
    password my_gitlab_token

With that in place, we no longer need to name our Go modules with a .git suffix.

Update RETURNING with Gorm

In some situations it’s not obvious how to use the RETURNING SQL clause with Gorm and databases that support it, like PostgreSQL or CockroachDB.

The particular use case where I found it challenging is the following: I want to update multiple records with a WHERE clause, ORDER BY clause and LIMIT clause and immediately return the updated records without a second query.

The simplified CockroachDB table schema without indices and noise is:

  column_name | data_type | is_nullable | column_default  
--------------+-----------+-------------+----------------
  field1      | INT8      |    true     | NULL                                  
  field2      | INT8      |    true     | NULL                                  
  created_at  | INT8      |    true     | NULL                                  

My initial attempts were as follows.

type Record struct {
    Field1    int
    Field2    int
    CreatedAt int64
}

func update(field1 int, field2 int, limit int) ([]Record, error) {
    var records []Record
    err := db.Model(&records).
        Table("table_name").
        Clauses(clause.Returning{}).
        Where("field1 = ?", field1).
        Order("created_at ASC").
        Limit(limit).
        Updates(map[string]interface{}{"field2": field2}).
        Error
    if err != nil {
        return nil, err
    }

    return records, nil
}

Gorm builds the SQL statement successfully, but for some reason doesn’t include the ORDER BY and LIMIT clauses, which are very important in my particular case. Ok, let’s try with a raw SQL statement.

func update(field1 int, field2 int, limit int) ([]Record, error) {
    var records []Record
    sql := `UPDATE table_name SET field2 = $1 WHERE field1 = $2 ORDER BY created_at ASC LIMIT $3 RETURNING *`
    if err := db.Model(&records).Table("table_name").Exec(sql, field2, field1, limit).Error; err != nil {
        return nil, err
    }
    return records, nil
}

Gorm executes the complete, correct query, but I couldn’t find a way to return/populate the records slice: it was always empty (I tried different arrangements and examples from the docs). Without any deep investigation into why that happens, it seems that Gorm simply ignores the RETURNING data when executing raw SQL statements.
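One avenue that may be worth trying before reaching for a workaround is Gorm’s Raw(...).Scan(...) chain, which, unlike Exec, is documented to scan returned rows into a destination. A hedged sketch, which I haven’t verified against the exact setup above:

```go
// Hypothetical alternative using Gorm's Raw + Scan, which scans returned
// rows into the destination slice instead of discarding them like Exec.
func updateRaw(field1 int, field2 int, limit int) ([]Record, error) {
    var records []Record
    sql := `UPDATE table_name SET field2 = $1 WHERE field1 = $2 ORDER BY created_at ASC LIMIT $3 RETURNING *`
    if err := db.Raw(sql, field2, field1, limit).Scan(&records).Error; err != nil {
        return nil, err
    }
    return records, nil
}
```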

What finally worked for me seems a bit like a hack, but it does the job. By using the ORDER BY and LIMIT clauses directly in the WHERE clause, the query works correctly as I need it.

func update(field1 int, field2 int, limit int) ([]Record, error) {
    var records []Record
    err := db.Model(&records).
        Table("table_name").
        Clauses(clause.Returning{}).
        Where("field1 = $1 ORDER BY created_at LIMIT $2", field1, limit).
        Updates(map[string]interface{}{"field2": field2}).
        Error
    if err != nil {
        return nil, err
    }
    return records, nil
}
The examples use Updates with map[string]interface{}, but the issue described here is not affected by that: using a struct or other ways of updating produces the same results.

Graceful Shutdown for Go HTTP servers

A simple Go implementation for graceful shutdown of HTTP servers with Go 1.8+ wrapped in ~20 lines of code.

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:    ":8080",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			time.Sleep(5 * time.Second)
			_, _ = w.Write([]byte("hello"))
		}),
	}

	if err := gracefulShutdown(srv, 10*time.Second); err != nil {
		log.Println(err)
	}
}

// gracefulShutdown stops the given HTTP server on
// receiving a stop signal and waits for the active connections
// to be closed for {timeout} period of time.
func gracefulShutdown(srv *http.Server, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() {
		c := make(chan os.Signal, 1)
		signal.Notify(c, os.Interrupt, syscall.SIGTERM)
		<-c

		ctx := context.Background()
		var cancel context.CancelFunc
		if timeout > 0 {
			ctx, cancel = context.WithTimeout(ctx, timeout)
			defer cancel()
		}

		done <- srv.Shutdown(ctx)
	}()

	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		return err
	}

	return <-done
}

The graceful shutdown function can also be used directly as a package from github.com: Graceful Shutdown

info

You should use Go 1.8+

The function doesn’t support TLS servers currently as I don’t need it, but it should be trivial to add that.

Simple GitLab CI for Go

An example for running unit tests with coverage output:

image: golang:1.16

stages:
  - test

before_script:
  - ln -s /builds /go/src/gitlab.com
  - cd /go/src/gitlab.com/${CI_PROJECT_PATH}

unit tests:
  stage: test
  script:
    - go version
    - go test -race $(go list ./... | grep -v /vendor/) -coverprofile=coverage.out
    - go tool cover -func=coverage.out
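If you also want the coverage number to show up in GitLab’s UI, the job can declare a coverage regex; the exact pattern below is my assumption for matching the `total:` line that go tool cover -func prints:

```yaml
unit tests:
  stage: test
  script:
    - go version
    - go test -race $(go list ./... | grep -v /vendor/) -coverprofile=coverage.out
    - go tool cover -func=coverage.out
  coverage: '/total:\s+\(statements\)\s+\d+\.\d+%/'
```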

To vendor, or not to vendor

Should we commit vendor in our Go projects?

Since the introduction of Go modules, the prevalent opinion on the internet is that we should not commit the vendor directory of our projects.

The argument against committing vendor is that it’s unnecessary, because the Go tools can resolve and find dependencies when needed on-the-fly. All public Go modules are also hosted by various Go Proxy servers for redundancy.

While I agree that it’s technically unnecessary and bloats the repo, I’d like to give some arguments in favour of committing vendor.

Here is a list of arguments for committing vendor:

  • Building the project doesn’t depend on some code being available on Github/Gitlab/… or the Go Proxy servers. Open source projects may disappear because of censorship, authors’ incentives, licensing changes or some other reasons I can’t currently think of. Not in your repo, not your code.

  • We may use internal or 3rd party Go modules (private) which may also disappear or become inaccessible, but if they are committed in vendor, they are part of our project and nothing breaks unexpectedly.

  • Go modules may not follow semantic versioning, which means the Go tools will rely on the latest commit hash when fetching them on-the-fly. Repo history may be rewritten (e.g. rebase) and you, a colleague or your CI job may end up with different code for the same dependency.

  • Committing vendor can improve your code review process. Typically, I’m used to committing dependency changes in a separate commit, so they can be easily viewed if the reviewer is curious.

Here’s an interesting observation related to bloating the repo. If I’m doing a code review and a team member has included a new dependency with 300 files (or updated one with 300 files changed), I’d be pretty curious to dig into it and start a discussion about the code quality, the need for the change, or alternative Go modules. This may lead us to actually decrease our binary size and overall complexity.

If I just see a single new line in go.mod in a new MR, chances are I won’t even think about it.

  • CI/CD jobs which perform compilation and build steps don’t need to waste time and network downloading the dependencies every time the job is executed. All needed dependencies are local and present (go build -mod vendor).

  • As a pseudo-argument, I can ask: why do important open source Go projects like Kubernetes commit vendor, if it’s not necessary?

Conclusion

For my own peace of mind I would always prefer to commit the vendor directory alongside go.mod and go.sum, while at the same time trying to keep the project dependencies to a minimum.

warning

The vendor directory SHOULD NOT be committed for Go libraries, frameworks or code which doesn’t produce executable binaries, unless you have a really good reason to do it.

Bash script preserve command-line arguments

If we want to preserve (pass) the command-line arguments when running a bash script and hand them to a program invoked by the script, we can use the "$@" pattern after the call ($* also works for simple cases, but unquoted it re-splits arguments that contain spaces). An example bash script called myscript.sh is shown below. It sets some environment variables (optional) and starts a program called myserver. The program will accept all command-line args passed to the bash script as if they were passed to the myserver program directly.

#!/bin/bash

SOME_ENV_VARIABLE=var1 \
ANOTHER_ENV_VARIABLE=var2 \
myserver "$@"

Calling the script like:

./myscript.sh -addr http://example.com -port 8080

will be like calling the myserver program like:

./myserver -addr http://example.com -port 8080

Nano syntax highlight for Go

The following syntax highlighting rules are placed inside the ~/.nanorc file.

## Go Syntax Highlighting
##
syntax "go" "\.go$"

## Types n stuff
color green "\<((u)?int(8|16|32|64)?|float(32|64)|byte|string|interface|bool)\>"
color yellow "\<(package|import|const|var|type|struct|func|chan|defer)\>"
color yellow "\<(for|range|if|else|case|default|switch)\>"
color magenta "\<(continue|break|return)\>"

## Strings
color cyan ""(\\.|[^"])*""
color cyan "'.'"

## Comment highlighting
color brightblue "//.*"
color brightblue start="/\*" end="\*/"

## Trailing whitespace
color ,green "[[:space:]]+$"

Deobfuscate Android Stack Traces Manually

tip

The retrace tool is inside <android-sdk>/tools/proguard/bin folder.

Place the stack traces in a file and run the following command:

./retrace.sh [-verbose] /path/to/mapping.txt /path/to/stacktrace.txt