Mirror of https://github.com/slimtoolkit/slim.git (synced 2025-06-03 04:00:23 +00:00)

Commit 871a9771a0 (parent ed31b87c2f)

initial version of basic image merge and code cleanup

Signed-off-by: Kyle Quest <kcq.public@gmail.com>

README.md (16 changes)
@@ -161,6 +161,8 @@ Elixir application images:
- [`LINT` COMMAND OPTIONS](#lint-command-options)
- [`XRAY` COMMAND OPTIONS](#xray-command-options)
- [`BUILD` COMMAND OPTIONS](#build-command-options)
- [`DEBUG` COMMAND OPTIONS](#debug-command-options)
- [`REGISTRY` COMMAND OPTIONS](#registry-command-options)
- [RUNNING CONTAINERIZED](#running-containerized)
- [DOCKER CONNECT OPTIONS](#docker-connect-options)
- [HTTP PROBE COMMANDS](#http-probe-commands)

@@ -274,7 +276,7 @@ Powered by Slim. It will help you understand and troubleshoot your application c

## BASIC USAGE INFO

`slim [global flags] [lint|xray|build|profile|update|version|help] [command-specific flags] <IMAGE_ID_OR_NAME>`
`slim [global flags] [lint|xray|build|profile|debug|update|version|help] [command-specific flags] <IMAGE_ID_OR_NAME>`

If you don't specify any command `slim` will start in the interactive prompt mode.

@@ -283,7 +285,7 @@ If you don't specify any command `slim` will start in the interactive prompt mod
- `xray` - Performs static analysis for the target container image (including 'reverse engineering' the Dockerfile for the image). Use this command if you want to know what's inside of your container image and what makes it fat.
- `lint` - Analyzes container instructions in Dockerfiles (Docker image support is WIP)
- `build` - Analyzes, profiles and optimizes your container image generating the supported security profiles. This is the most popular command.
- `debug` - Debug the running target container. This command is useful for troubleshooting the running target container.
- `debug` - Debug the running target container. This command is useful for troubleshooting running containers created from minimal/minified or regular container images.
- `registry` - Execute registry operations.
- `profile` - Performs basic container image analysis and dynamic container analysis, but it doesn't generate an optimized image.
- `run` - Runs one or more containers (for now runs a single container similar to `docker run`)
@@ -548,8 +550,9 @@ The `--dockerfile` option makes it possible to build a new minified image direct
The `--use-local-mounts` option is used to choose how the Slim sensor is added to the target container and how the sensor artifacts are delivered back to the master. If you enable this option you'll get the original Slim app behavior where it uses local file system volume mounts to add the sensor executable and to extract the artifacts from the target container. This option doesn't always work as expected in the dockerized environment where Slim itself is running in a Docker container. When this option is disabled (default behavior) then a separate Docker volume is used to mount the sensor and the sensor artifacts are explicitly copied from the target container.

### `DEBUG` COMMAND OPTIONS

- `--debug-image` - you can debug the target container image using the `--debug-image` flag. The default value for this flag is `nicolaka/netshoot`.
- `--target` - you can specify the target docker container or it's name/ID (not docker image name/ID) using the `--target`. Note that the target container must be running. You can use the `docker run` command to start the target container.
- `--target` - you can specify the target docker container or its name/ID (not docker image name/ID) using the `--target` flag. Note that the target container must be running. You can use the `docker run` command to start the target container.
- `--help` show help (default: false)

### `REGISTRY` COMMAND OPTIONS
@@ -726,6 +729,8 @@ You can use the `--http-probe-exec` and `--http-probe-exec-file` options to run

## DEBUGGING MINIFIED CONTAINERS

### Debugging the "Hard Way"

You can create dedicated debugging side-car container images loaded with the tools you need for debugging target containers. This allows you to keep your production container images small. The debugging side-car containers attach to the running target containers.

Assuming you have a running container named `node_app_alpine` you can attach your debugging side-car with a command like this: `docker run --rm -it --pid=container:node_app_alpine --net=container:node_app_alpine --cap-add sys_admin alpine sh`. In this example, the debugging side-car is a regular alpine image. This is exactly what happens with the `node_alpine` app sample (located in the `node_alpine` directory of the `examples` repo) and the `run_debug_sidecar.command` helper script.

@@ -751,11 +756,12 @@ drwxr-xr-x 3 root root 4.0K Sep 2 15:51 node_modules

Some of the useful debugging commands include `cat /proc/<TARGET_PID>/cmdline`, `ls -l /proc/<TARGET_PID>/cwd`, `cat /proc/1/environ`, `cat /proc/<TARGET_PID>/limits`, `cat /proc/<TARGET_PID>/status` and `ls -l /proc/<TARGET_PID>/fd`.

### Example
### Debugging Using the `debug` Command

The `debug` command is still pretty basic and it requires the target container to have a shareable IPC namespace. By default, Docker starts containers with a non-shareable IPC namespace, so start the target container with the `--ipc 'shareable'` flag on `docker run`. The main mode of the `debug` command is to interact with the debugged target container through a terminal session started by `slim debug`.

### Steps to debug your container (nginx example) -
#### Steps to debug your container (nginx example)

1. Start the target container you want to debug (it doesn't need to be minified)
2. Run the debug command
go.mod (1 change)

@@ -40,6 +40,7 @@ require (
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/distribution/distribution/v3 v3.0.0-20210316161203-a01c71e2477e // indirect

go.sum (3 changes)

@@ -132,8 +132,11 @@ github.com/c4milo/unpackit v0.0.0-20170704181138-4ed373e9ef1c h1:aprLqMn7gSPT+vd
github.com/c4milo/unpackit v0.0.0-20170704181138-4ed373e9ef1c/go.mod h1:Ie6SubJv/NTO9Q0UBH0QCl3Ve50lu9hjbi5YJUw03TE=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
@@ -192,7 +192,7 @@ func OnCommand(

xc.Out.Error("docker.connect.error", exitMsg)

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -578,7 +578,7 @@ func OnCommand(
"value": overrides.Network,
})

exitCode := commands.ECTCommon | commands.ECBadNetworkName
exitCode := commands.ECTCommon | commands.ECCBadNetworkName
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -596,7 +596,7 @@ func OnCommand(
"value": overrides.Network,
})

exitCode := commands.ECTCommon | commands.ECBadNetworkName
exitCode := commands.ECTCommon | commands.ECCBadNetworkName
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -62,7 +62,7 @@ func inspectFatImage(
"message": "make sure the target image already exists locally (use --pull flag to auto-download it from registry)",
})

exitCode := commands.ECTBuild | ecbImageBuildError
exitCode := commands.ECTCommon | commands.ECCImageNotFound
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -66,6 +66,7 @@ type GenericParams struct {
ClientConfig *config.DockerClient
}

// TODO: spread these code types across all command definition, so it's not all defined here
// Exit Code Types
const (
ECTCommon = 0x01000000

@@ -76,13 +77,15 @@ const (
ectVersion = 0x06000000
ECTXray = 0x07000000
ECTRun = 0x08000000
ECTMerge = 0x09000000
)

// Build command exit codes
const (
ecOther = iota + 1
ECNoDockerConnectInfo
ECBadNetworkName
ECCOther = iota + 1
ECCImageNotFound
ECCNoDockerConnectInfo
ECCBadNetworkName
)

const (
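The hunks above replace the per-command `EC*` exit-code constants with shared `ECC*` ones and combine them with an exit-code type such as `ECTCommon` via bitwise OR. Below is a minimal, self-contained sketch of how such a composed code can be formed and split back apart; the constant values mirror the ones visible in this diff, while the `describe` helper and the exact type/cause masks are illustrative assumptions rather than code from the repository.

```go
package main

import "fmt"

// Exit code "type" (high bits), copied from the value visible in this diff.
const ECTCommon = 0x01000000

// Common exit code "causes" (low bits), matching the renamed ECC* constants.
const (
	ECCOther = iota + 1
	ECCImageNotFound
	ECCNoDockerConnectInfo
	ECCBadNetworkName
)

// describe is a hypothetical helper that splits a composed exit code back
// into its type and cause parts (the mask split is an assumption, not repo code).
func describe(code int) (ectype, cause int) {
	// Clear the low "cause" bits to recover the type, and vice versa.
	return code &^ 0x00FFFFFF, code & 0x00FFFFFF
}

func main() {
	exitCode := ECTCommon | ECCNoDockerConnectInfo
	ectype, cause := describe(exitCode)
	fmt.Printf("exit.code=0x%08x type=0x%08x cause=%d\n", exitCode, ectype, cause)
	// prints: exit.code=0x01000003 type=0x01000000 cause=3
}
```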
@@ -49,7 +49,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -49,7 +49,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -55,7 +55,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -49,7 +49,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,
@@ -24,6 +24,7 @@ var CLI = &cli.Command{
Flags: []cli.Flag{
cflag(FlagImage),
cflag(FlagUseLastImageMetadata),
cflag(FlagTag),
},
Action: func(ctx *cli.Context) error {
if ctx.Args().Len() < 1 {

@@ -55,14 +56,16 @@ var CLI = &cli.Command{
}

type CommandParams struct {
FirstImage string `json:"first_image"`
LastImage string `json:"last_image"`
UseLastImageMetadata bool `json:"use_last_image_metadata"`
FirstImage string `json:"first_image"`
LastImage string `json:"last_image"`
UseLastImageMetadata bool `json:"use_last_image_metadata"`
OutputTags []string `json:"output_tags"`
}

func CommandFlagValues(xc *app.ExecutionContext, ctx *cli.Context) (*CommandParams, error) {
values := &CommandParams{
UseLastImageMetadata: ctx.Bool(FlagUseLastImageMetadata),
OutputTags: ctx.StringSlice(FlagTag),
}

images := ctx.StringSlice(FlagImage)

@@ -9,12 +9,14 @@ import (
const (
FlagImage = "image"
FlagUseLastImageMetadata = "use-last-image-metadata"
FlagTag = "tag"
)

// Merge command flag usage info
const (
FlagImageUsage = "Image to merge (flag instance position determines the merge order)"
FlagUseLastImageMetadataUsage = "Use only the last image metadata for the merged image"
FlagTagUsage = "Custom tags for the output image"
)

var Flags = map[string]cli.Flag{

@@ -30,6 +32,12 @@ var Flags = map[string]cli.Flag{
Usage: FlagUseLastImageMetadataUsage,
EnvVars: []string{"DSLIM_MERGE_USE_LAST_IMAGE_META"},
},
FlagTag: &cli.StringSliceFlag{
Name: FlagTag,
Value: cli.NewStringSlice(),
Usage: FlagTagUsage,
EnvVars: []string{"DSLIM_TARGET_TAG"},
},
}

func cflag(name string) cli.Flag {
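The `FlagImage` usage string above says the flag instance position determines the merge order, which works because a repeatable string-slice flag preserves the order of its occurrences. The sketch below shows that behavior with `github.com/urfave/cli/v2` (the package the `cli.StringSliceFlag` and `cli.NewStringSlice` types above appear to come from); treating the first occurrence as the "first" image and the final occurrence as the "last" image is an assumption based on the usage text, not the exact logic of `CommandFlagValues`.

```go
package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name: "merge-flags-demo",
		Flags: []cli.Flag{
			// Repeatable flags: each occurrence appends to the slice in
			// command-line order, so position encodes the merge order.
			&cli.StringSliceFlag{Name: "image", Usage: "Image to merge (flag instance position determines the merge order)"},
			&cli.StringSliceFlag{Name: "tag", Usage: "Custom tags for the output image"},
		},
		Action: func(ctx *cli.Context) error {
			images := ctx.StringSlice("image")
			if len(images) < 2 {
				return fmt.Errorf("need at least two --image values, got %d", len(images))
			}

			// Assumption: the first occurrence is the "first" image and the
			// final occurrence is the "last" image whose metadata wins.
			fmt.Printf("first=%s last=%s tags=%v\n", images[0], images[len(images)-1], ctx.StringSlice("tag"))
			return nil
		},
	}

	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Running it as `merge-flags-demo --image app:v1 --image app:v2 --tag app.merged:v2` would print `first=app:v1 last=app:v2 tags=[app.merged:v2]`.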
@@ -1,17 +1,28 @@
package merge

import (
"archive/tar"
"errors"
"fmt"
"io"
"os"
"strings"

"github.com/cespare/xxhash/v2"
log "github.com/sirupsen/logrus"

"github.com/docker-slim/docker-slim/pkg/app"
"github.com/docker-slim/docker-slim/pkg/app/master/commands"
"github.com/docker-slim/docker-slim/pkg/app/master/inspectors/image"
"github.com/docker-slim/docker-slim/pkg/app/master/version"
"github.com/docker-slim/docker-slim/pkg/command"
"github.com/docker-slim/docker-slim/pkg/docker/dockerclient"
"github.com/docker-slim/docker-slim/pkg/imagebuilder"
"github.com/docker-slim/docker-slim/pkg/imagebuilder/internalbuilder"
"github.com/docker-slim/docker-slim/pkg/imagereader"
"github.com/docker-slim/docker-slim/pkg/report"
"github.com/docker-slim/docker-slim/pkg/util/errutil"
"github.com/docker-slim/docker-slim/pkg/util/fsutil"
v "github.com/docker-slim/docker-slim/pkg/version"

log "github.com/sirupsen/logrus"
)

const appName = commands.AppName

@@ -28,14 +39,19 @@ func OnCommand(

viChan := version.CheckAsync(gparams.CheckVersion, gparams.InContainer, gparams.IsDSImage)

cmdReport := report.NewEditCommand(gparams.ReportLocation, gparams.InContainer)
cmdReport := report.NewMergeCommand(gparams.ReportLocation, gparams.InContainer)
cmdReport.State = command.StateStarted
cmdReport.FirstImage = cparams.FirstImage
cmdReport.LastImage = cparams.LastImage
cmdReport.UseLastImageMetadata = cparams.UseLastImageMetadata

xc.Out.State("started")
xc.Out.Info("params",
ovars{
"image.first": cparams.FirstImage,
"image.last": cparams.LastImage,
"image.first": cparams.FirstImage,
"image.last": cparams.LastImage,
"use.last.image.metadata": cparams.UseLastImageMetadata,
"output.tags": cparams.OutputTags,
})

client, err := dockerclient.New(gparams.ClientConfig)

@@ -50,7 +66,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -59,12 +75,234 @@ func OnCommand(
})
xc.Exit(exitCode)
}
errutil.FailOn(err)
xc.FailOn(err)

if gparams.Debug {
version.Print(xc, cmdName, logger, client, false, gparams.InContainer, gparams.IsDSImage)
}

//////////////////////////////////////////////////
ensureImage := func(name string, imageRef string, cr *report.MergeCommand) string {
imageInspector, err := image.NewInspector(client, imageRef)
xc.FailOn(err)

if imageInspector.NoImage() {
xc.Out.Error(fmt.Sprintf("%s.image.not.found", name), "make sure the target image already exists locally")

cmdReport.State = command.StateError
exitCode := commands.ECTCommon | commands.ECCImageNotFound
xc.Out.State("exited",
ovars{
"exit.code": exitCode,
})
xc.Exit(exitCode)
}

return imageInspector.ImageRef
}

//and refresh the image refs
cparams.FirstImage = ensureImage("first", cmdReport.FirstImage, cmdReport)
cmdReport.FirstImage = cparams.FirstImage

//and refresh the image refs
cparams.LastImage = ensureImage("last", cmdReport.LastImage, cmdReport)
cmdReport.LastImage = cparams.LastImage

outputTags := cparams.OutputTags
if len(outputTags) == 0 {
var outputName string
if strings.Contains(cparams.LastImage, ":") {
parts := strings.SplitN(cparams.LastImage, ":", 2)
outputName = fmt.Sprintf("%s.merged:%s", parts[0], parts[1])
} else {
outputName = fmt.Sprintf("%s.merged", cparams.LastImage)
}
outputTags = append(outputTags, outputName)
}
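When no `--tag` is given, the default output tag is derived from the last image reference by inserting a `.merged` suffix before the tag separator. A standalone sketch of that naming rule follows (it mirrors the `strings.SplitN`/`fmt.Sprintf` logic above; the `defaultMergedTag` helper name is not from the repository).

```go
package main

import (
	"fmt"
	"strings"
)

// defaultMergedTag mirrors the default output-tag rule in the hunk above:
// "app:1.2" -> "app.merged:1.2" and "app" -> "app.merged".
func defaultMergedTag(lastImage string) string {
	if strings.Contains(lastImage, ":") {
		parts := strings.SplitN(lastImage, ":", 2)
		return fmt.Sprintf("%s.merged:%s", parts[0], parts[1])
	}
	return fmt.Sprintf("%s.merged", lastImage)
}

func main() {
	fmt.Println(defaultMergedTag("nginx:1.25")) // nginx.merged:1.25
	fmt.Println(defaultMergedTag("myorg/app"))  // myorg/app.merged
}
```

Because the split happens at the first `:`, a reference that includes a registry port but no tag (for example `localhost:5000/app`) would be split at the port separator rather than at a tag.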
fiReader, err := imagereader.New(cparams.FirstImage)
xc.FailOn(err)
liReader, err := imagereader.New(cparams.LastImage)
xc.FailOn(err)

xc.Out.State("image.metadata.merge.start")
fiImageConfig, err := fiReader.ImageConfig()
xc.FailOn(err)
liImageConfig, err := liReader.ImageConfig()
xc.FailOn(err)

var outImageConfig *imagebuilder.ImageConfig
if cparams.UseLastImageMetadata {
outImageConfig = liImageConfig
} else {
imageConfig := *liImageConfig

//merge environment variables (todo: do a better job merging envs, need to parse k/v)
envMap := map[string]struct{}{}
for _, v := range fiImageConfig.Config.Env {
envMap[v] = struct{}{}
}
for _, v := range liImageConfig.Config.Env {
envMap[v] = struct{}{}
}

imageConfig.Config.Env = []string{}
for k := range envMap {
imageConfig.Config.Env = append(imageConfig.Config.Env, k)
}

//merge labels
labelMap := map[string]string{}
for k, v := range fiImageConfig.Config.Labels {
labelMap[k] = v
}
for k, v := range liImageConfig.Config.Labels {
labelMap[k] = v
}

imageConfig.Config.Labels = labelMap

//merge exposed ports
portMap := map[string]struct{}{}
for k := range fiImageConfig.Config.ExposedPorts {
portMap[k] = struct{}{}
}
for k := range liImageConfig.Config.ExposedPorts {
portMap[k] = struct{}{}
}

imageConfig.Config.ExposedPorts = portMap

//merge volumes
volumeMap := map[string]struct{}{}
for k := range fiImageConfig.Config.Volumes {
volumeMap[k] = struct{}{}
}
for k := range liImageConfig.Config.Volumes {
volumeMap[k] = struct{}{}
}

imageConfig.Config.Volumes = volumeMap

//Merging OnBuild requires the instruction order to be preserved
//Auto-merging OnBuild instructions is not always ideal because
//of the potential side effects if the merged images are not very compatible.
//Merging minified images of the same source image should have no side effects
//because the OnBuild instructions will be identical.
sameLists := func(first, second []string) bool {
if len(first) != len(second) {
return false
}

for idx := range first {
if first[idx] != second[idx] {
return false
}
}

return true
}

if !sameLists(fiImageConfig.Config.OnBuild, liImageConfig.Config.OnBuild) {
var onBuild []string
onBuild = append(onBuild, fiImageConfig.Config.OnBuild...)
onBuild = append(onBuild, liImageConfig.Config.OnBuild...)
imageConfig.Config.OnBuild = onBuild
}

outImageConfig = &imageConfig
}
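The environment-variable merge above treats each `KEY=VALUE` string as an opaque set member, which is what the inline TODO ("do a better job merging envs, need to parse k/v") is about: two images that set the same variable to different values both survive into the merged config. The sketch below shows one possible key-aware alternative where entries from the last image win on conflicts; it is an illustration of the idea, not the repository's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// mergeEnvByKey splits each KEY=VALUE entry on the first '=' and lets
// entries from the last image override the same key from the first image,
// while preserving first-seen key order.
func mergeEnvByKey(firstEnv, lastEnv []string) []string {
	merged := map[string]string{}
	order := []string{}

	add := func(entries []string) {
		for _, e := range entries {
			parts := strings.SplitN(e, "=", 2)
			key, val := parts[0], ""
			if len(parts) == 2 {
				val = parts[1]
			}
			if _, seen := merged[key]; !seen {
				order = append(order, key)
			}
			merged[key] = val // later images win on conflicts
		}
	}

	add(firstEnv)
	add(lastEnv)

	out := make([]string, 0, len(order))
	for _, key := range order {
		out = append(out, key+"="+merged[key])
	}
	return out
}

func main() {
	first := []string{"PATH=/usr/bin", "APP_MODE=debug"}
	last := []string{"APP_MODE=release", "LANG=C.UTF-8"}
	fmt.Println(mergeEnvByKey(first, last))
	// [PATH=/usr/bin APP_MODE=release LANG=C.UTF-8]
}
```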

xc.Out.State("image.metadata.merge.done")
xc.Out.State("image.data.merge.start")

fiDataTarName, err := fiReader.ExportFilesystem()
xc.FailOn(err)

liDataTarName, err := liReader.ExportFilesystem()
xc.FailOn(err)

f1, err := os.Open(fiDataTarName)
xc.FailOn(err)
defer f1.Close()

index, err := tarMapFromFile(f1)
xc.FailOn(err)

f2, err := os.Open(liDataTarName)
xc.FailOn(err)
defer f2.Close()

index2, err := tarMapFromFile(f2)
xc.FailOn(err)

fmt.Printf("Updating tar map with first tar data...\n")
for p, info := range index2 {
other, found := index[p]
if !found {
index[p] = info
continue
}

if info.Header.Typeflag == other.Header.Typeflag &&
info.Header.Size == other.Header.Size &&
info.Hash == other.Hash {
//can/should also check info.Header.Mode and info.Header.ModTime
//if info.Header.ModTime.After(other.Header.ModTime) {
//	info.Replaced = append(other.Replaced, other)
//	index[p] = info
//	continue
//}

other.Dups++
continue
}

info.Replaced = append(other.Replaced, other)
index[p] = info
}

outTarFileName, err := tarFromMap(logger, "", index)

if !fsutil.Exists(outTarFileName) ||
!fsutil.IsRegularFile(outTarFileName) ||
!fsutil.IsTarFile(outTarFileName) {
xc.FailOn(fmt.Errorf("bad output tar - %s", outTarFileName))
}

xc.Out.State("image.data.merge.done")
xc.Out.State("output.image.generate.start")

ibo, err := imagebuilder.SimpleBuildOptionsFromImageConfig(outImageConfig)
xc.FailOn(err)

ibo.Tags = outputTags

layerInfo := imagebuilder.LayerDataInfo{
Type: imagebuilder.TarSource,
Source: outTarFileName,
Params: &imagebuilder.DataParams{
TargetPath: "/",
},
}

ibo.Layers = append(ibo.Layers, layerInfo)

engine, err := internalbuilder.New(
false, //show build logs doShowBuildLogs,
true, //push to daemon - TODO: have a param to control this later
//output image tar (if not 'saving' to daemon)
false)
xc.FailOn(err)

err = engine.Build(*ibo)
xc.FailOn(err)

ensureImage("output", outputTags[0], cmdReport)
xc.Out.State("output.image.generate.done")
//////////////////////////////////////////////////

xc.Out.State("completed")
cmdReport.State = command.StateCompleted
xc.Out.State("done")

@@ -80,3 +318,136 @@ func OnCommand(
})
}
}

type tfInfo struct {
FileIndex uint32
Header *tar.Header
Hash uint64
File *os.File
DataOffset int64
Dups uint32 //to count duplicates (can have extra field to track tar file metadata later)
Replaced []*tfInfo
}

func tarMapFromFile(f *os.File) (map[string]*tfInfo, error) {
tr := tar.NewReader(f)
tarMap := map[string]*tfInfo{}

var fileIndex uint32
for {
th, err := tr.Next()

if err != nil {
if errors.Is(err, io.EOF) {
break
}

fmt.Println(err)
return tarMap, err
}

if th == nil {
fmt.Println("skipping empty tar header...")
continue
}

offset, err := f.Seek(0, os.SEEK_CUR)
if err != nil {
fmt.Println(err)
return tarMap, err
}

sr := io.NewSectionReader(f, offset, th.Size)

hash := xxhash.New()
//if _, err := io.Copy(hash, tr); err != nil {
if _, err := io.Copy(hash, sr); err != nil {
//_, err = io.CopyN(hash, sr, th.Size)
log.Fatalf("Failed to compute hash: %v", err)
}
hashValue := hash.Sum64()

//NOTE:
//Not exposing the archived file data right now
//because it'll require to read/load the data into memory
//and for big images it'll be a lot of data.
//For now just re-read the data when needed.

tarMap[th.Name] = &tfInfo{
FileIndex: fileIndex,
Header: th,
Hash: hashValue,
File: f, //tar file ref (not the file inside tar)
DataOffset: offset, //offset in tar file
}

fileIndex++
}

return tarMap, nil
}

func tarFromMap(logger *log.Entry, outputPath string, tarMap map[string]*tfInfo) (string, error) {
var out *os.File

if outputPath == "" {
tarFile, err := os.CreateTemp("", "image-output-*.tar")
if err != nil {
return "", err
}

out = tarFile
} else {
tarFile, err := os.Create(outputPath)
if err != nil {
return "", err
}

out = tarFile
}

defer out.Close()

// Create a new tar archive
tw := tar.NewWriter(out)
defer tw.Close()

// Iterate over the input files
for filePath, info := range tarMap {
logger.Tracef("%s -> %+v\n", filePath, info)

if err := tw.WriteHeader(info.Header); err != nil {
panic(err)
}

if info.Header.Size == 0 {
continue
}

if info.DataOffset < 0 {
continue
}

sr := io.NewSectionReader(info.File, info.DataOffset, info.Header.Size)
if _, err := io.Copy(tw, sr); err != nil {
return "", err
}
}

return out.Name(), nil
}

func TarTypeName(flag byte) string {
switch flag {
case tar.TypeDir:
return "dir"
case tar.TypeReg, tar.TypeRegA:
return "file"
case tar.TypeSymlink:
return "symlink"
case tar.TypeLink:
return "hardlink"
default:
return fmt.Sprintf("%v", flag)
}
}
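The two helpers above avoid loading file contents into memory: `tarMapFromFile` records each entry's header, an `xxhash` digest, and the byte offset of its data inside the source tar, and `tarFromMap` later streams that data back out through an `io.NewSectionReader`. The self-contained sketch below exercises the same offset-plus-section-reader idea against an in-memory archive (using a `bytes.Reader` instead of an `*os.File`); it is an illustration of the technique, not code from the repository.

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// Build a tiny tar archive in memory.
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	data := []byte("hello merge")
	tw.WriteHeader(&tar.Header{Name: "etc/greeting", Mode: 0o644, Size: int64(len(data)), Typeflag: tar.TypeReg})
	tw.Write(data)
	tw.Close()

	// Walk the archive, remembering each entry's data offset instead of its bytes.
	src := bytes.NewReader(buf.Bytes())
	tr := tar.NewReader(src)
	for {
		th, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}

		// After Next() the underlying reader is positioned at the entry's data.
		offset, _ := src.Seek(0, io.SeekCurrent)

		// Hash the entry in place via a section reader (no full read into memory).
		h := xxhash.New()
		if _, err := io.Copy(h, io.NewSectionReader(src, offset, th.Size)); err != nil {
			panic(err)
		}
		fmt.Printf("%s offset=%d size=%d xxhash=%x\n", th.Name, offset, th.Size, h.Sum64())
	}
}
```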
@@ -51,7 +51,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -107,7 +107,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -129,7 +129,7 @@ func OnCommand(
"value": overrides.Network,
})

exitCode := commands.ECTCommon | commands.ECBadNetworkName
exitCode := commands.ECTCommon | commands.ECCBadNetworkName
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -146,7 +146,7 @@ func OnCommand(
"value": overrides.Network,
})

exitCode := commands.ECTCommon | commands.ECBadNetworkName
exitCode := commands.ECTCommon | commands.ECCBadNetworkName
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -56,7 +56,7 @@ func OnPullCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -132,7 +132,7 @@ func OnPushCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -190,7 +190,7 @@ func OnCopyCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -60,7 +60,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -43,7 +43,7 @@ func OnCommand(
"message": exitMsg,
})

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -33,7 +33,6 @@ type ovars = app.OutVars
// Xray command exit codes
const (
ecxOther = iota + 1
ecxImageNotFound
)

const (

@@ -125,7 +124,7 @@ func OnCommand(

xc.Out.Error("docker.connect.error", exitMsg)

exitCode := commands.ECTCommon | commands.ECNoDockerConnectInfo
exitCode := commands.ECTCommon | commands.ECCNoDockerConnectInfo
xc.Out.State("exited",
ovars{
"exit.code": exitCode,

@@ -157,7 +156,7 @@ func OnCommand(
} else {
xc.Out.Error("image.not.found", "make sure the target image already exists locally (use --pull flag to auto-download it from registry)")

exitCode := commands.ECTBuild | ecxImageNotFound
exitCode := commands.ECTCommon | commands.ECCImageNotFound
xc.Out.State("exited",
ovars{
"exit.code": exitCode,
@@ -17,10 +17,10 @@ import (
log "github.com/sirupsen/logrus"

"github.com/docker-slim/docker-slim/pkg/app"
"github.com/docker-slim/docker-slim/pkg/app/sensor/artifacts"
"github.com/docker-slim/docker-slim/pkg/app/sensor/artifact"
"github.com/docker-slim/docker-slim/pkg/app/sensor/controlled"
"github.com/docker-slim/docker-slim/pkg/app/sensor/execution"
"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors"
"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor"
"github.com/docker-slim/docker-slim/pkg/app/sensor/standalone"
"github.com/docker-slim/docker-slim/pkg/appbom"
"github.com/docker-slim/docker-slim/pkg/ipc/event"

@@ -148,7 +148,7 @@ func Run() {
if len(*logFile) > 0 {
artifactsExtra = append(artifactsExtra, *logFile)
}
artifactor := artifacts.NewArtifactor(*artifactsDir, artifactsExtra)
artifactor := artifact.NewProcessor(*artifactsDir, artifactsExtra)

ctx := context.Background()
exe, err := newExecution(

@@ -179,10 +179,10 @@ func Run() {
if err := sen.Run(); err != nil {
exe.PubEvent(event.Error, err.Error())
log.WithError(err).Error("sensor: run finished with error")
if errors.Is(err, monitors.ErrInsufficientPermissions) {
if errors.Is(err, monitor.ErrInsufficientPermissions) {
log.Info("sensor: Instrumented containers require root and ALL capabilities enabled. Example: `docker run --user root --cap-add ALL app:v1-instrumented`")
}
if errors.Is(err, monitors.ErrInsufficientPermissions) {
if errors.Is(err, monitor.ErrInsufficientPermissions) {
}
} else {
log.Info("sensor: run finished succesfully")

@@ -229,7 +229,7 @@ func newSensor(
ctx context.Context,
exe execution.Interface,
mode string,
artifactor artifacts.Artifactor,
artifactor artifact.Processor,
) (sensor, error) {
workDir, err := os.Getwd()
errutil.WarnOn(err)

@@ -252,7 +252,7 @@ func newSensor(
return controlled.NewSensor(
ctx,
exe,
monitors.NewCompositeMonitor,
monitor.NewCompositeMonitor,
artifactor,
workDir,
mountPoint,

@@ -261,7 +261,7 @@ func newSensor(
return standalone.NewSensor(
ctx,
exe,
monitors.NewCompositeMonitor,
monitor.NewCompositeMonitor,
artifactor,
workDir,
mountPoint,
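The sensor changes in this hunk group rename the `artifacts` package and its `Artifactor` interface to `artifact.Processor` (and `monitors` to `monitor`); per the comment shown further below, the interface exists mostly so the sensor tests can mock it. The sketch that follows illustrates that idea with a trimmed-down stand-in interface and a fake implementation; the types, simplified method set, and directory path are illustrative, not the repository's definitions.

```go
package main

import "fmt"

// Processor is a trimmed-down stand-in for the renamed sensor interface
// (previously Artifactor). Only two methods are mirrored here.
type Processor interface {
	ArtifactsDir() string
	Archive() error
}

// fakeProcessor is the kind of test double the interface enables.
type fakeProcessor struct {
	dir      string
	archived bool
}

func (f *fakeProcessor) ArtifactsDir() string { return f.dir }
func (f *fakeProcessor) Archive() error       { f.archived = true; return nil }

// finishRun stands in for sensor code that depends only on the interface.
func finishRun(p Processor) error {
	fmt.Println("artifacts dir:", p.ArtifactsDir())
	return p.Archive()
}

func main() {
	fp := &fakeProcessor{dir: "/tmp/sensor-artifacts"}
	if err := finishRun(fp); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("archived:", fp.archived)
}
```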
@@ -1,7 +1,7 @@
//go:build linux
// +build linux

package artifacts
package artifact

import (
"bytes"

@@ -23,8 +23,8 @@ import (
log "github.com/sirupsen/logrus"

"github.com/docker-slim/docker-slim/pkg/app"
"github.com/docker-slim/docker-slim/pkg/app/sensor/detectors/binfile"
"github.com/docker-slim/docker-slim/pkg/app/sensor/inspectors/sodeps"
"github.com/docker-slim/docker-slim/pkg/app/sensor/detector/binfile"
"github.com/docker-slim/docker-slim/pkg/app/sensor/inspector/sodeps"
"github.com/docker-slim/docker-slim/pkg/artifact"
"github.com/docker-slim/docker-slim/pkg/certdiscover"
"github.com/docker-slim/docker-slim/pkg/ipc/command"

@@ -256,7 +256,7 @@ func findFileTypeCmd() {
}

// Needed mostly to be able to mock it in the sensor tests.
type Artifactor interface {
type Processor interface {
// Current location of the artifacts folder.
ArtifactsDir() string

@@ -281,7 +281,7 @@ type Artifactor interface {
Archive() error
}

type artifactor struct {
type processor struct {
artifactsDirName string

// Extra files to put into the artifacts archive before exiting.

@@ -290,33 +290,37 @@ type artifactor struct {
origPathMap map[string]struct{}
}

func NewArtifactor(artifactsDirName string, artifactsExtra []string) Artifactor {
return &artifactor{
func NewProcessor(artifactsDirName string, artifactsExtra []string) Processor {
return &processor{
artifactsDirName: artifactsDirName,
artifactsExtra: artifactsExtra,
}
}

func (a *artifactor) ArtifactsDir() string {
func (a *processor) ArtifactsDir() string {
return a.artifactsDirName
}

func (a *artifactor) GetCurrentPaths(root string, excludes []string) (map[string]struct{}, error) {
func (a *processor) GetCurrentPaths(root string, excludes []string) (map[string]struct{}, error) {
logger := log.WithField("op", "processor.GetCurrentPaths")
logger.Trace("call")
defer logger.Trace("exit")

pathMap := map[string]struct{}{}
err := filepath.Walk(root,
func(pth string, info os.FileInfo, err error) error {
if strings.HasPrefix(pth, "/proc/") {
log.Debugf("sensor: getCurrentPaths() - skipping /proc file system objects...")
logger.Debugf("skipping /proc file system objects... - '%s'", pth)
return filepath.SkipDir
}

if strings.HasPrefix(pth, "/sys/") {
log.Debugf("sensor: getCurrentPaths() - skipping /sys file system objects...")
logger.Debugf("skipping /sys file system objects... - '%s'", pth)
return filepath.SkipDir
}

if strings.HasPrefix(pth, "/dev/") {
log.Debugf("sensor: getCurrentPaths() - skipping /dev file system objects...")
logger.Debugf("skipping /dev file system objects... - '%s'", pth)
return filepath.SkipDir
}

@@ -333,7 +337,7 @@ func (a *artifactor) GetCurrentPaths(root string, excludes []string) (map[string
}

if err != nil {
log.Debugf("sensor: getCurrentPaths() - skipping %s with error: %v", pth, err)
logger.Debugf("skipping %s with error: %v", pth, err)
return nil
}
@@ -365,51 +369,53 @@ func (a *artifactor) GetCurrentPaths(root string, excludes []string) (map[string
return pathMap, nil
}

func (a *artifactor) PrepareEnv(cmd *command.StartMonitor) error {
log.Debug("sensor.app.prepareEnv()")
func (a *processor) PrepareEnv(cmd *command.StartMonitor) error {
logger := log.WithField("op", "processor.PrepareEnv")
logger.Trace("call")
defer logger.Trace("exit")

dstRootPath := filepath.Join(a.artifactsDirName, app.ArtifactFilesDirName)
log.Debugf("sensor.app.prepareEnv - prep file artifacts root dir - '%s'", dstRootPath)
logger.Debugf("prep file artifacts root dir - '%s'", dstRootPath)
if err := os.MkdirAll(dstRootPath, 0777); err != nil {
return err
}

if cmd != nil && len(cmd.Preserves) > 0 {
log.Debugf("sensor.app.prepareEnv(): preserving paths - %d", len(cmd.Preserves))
logger.Debugf("preserving paths - %d", len(cmd.Preserves))

preservedDirPath := filepath.Join(a.artifactsDirName, preservedDirName)
log.Debugf("sensor.app.prepareEnv - prep preserved artifacts root dir - '%s'", preservedDirPath)
logger.Debugf("prep preserved artifacts root dir - '%s'", preservedDirPath)
if err := os.MkdirAll(preservedDirPath, 0777); err != nil {
return err
}

preservePaths := preparePaths(getKeys(cmd.Preserves))
log.Debugf("sensor.app.prepareEnv - preservePaths(%v): %+v", len(preservePaths), preservePaths)
logger.Debugf("preservePaths(%v): %+v", len(preservePaths), preservePaths)

newPerms := getRecordsWithPerms(cmd.Preserves)
log.Debugf("sensor.app.prepareEnv - newPerms(%v): %+v", len(newPerms), newPerms)
logger.Debugf("newPerms(%v): %+v", len(newPerms), newPerms)

for inPath, isDir := range preservePaths {
if artifact.IsFilteredPath(inPath) {
log.Debugf("sensor.app.prepareEnv(): skipping filtered path [isDir=%v] %s", isDir, inPath)
logger.Debugf("skipping filtered path [isDir=%v] %s", isDir, inPath)
continue
}

dstPath := fmt.Sprintf("%s%s", preservedDirPath, inPath)
log.Debugf("sensor.app.prepareEnv(): [isDir=%v] %s", isDir, dstPath)
logger.Debugf("[isDir=%v] %s", isDir, dstPath)

if isDir {
err, errs := fsutil.CopyDir(cmd.KeepPerms, inPath, dstPath, true, true, nil, nil, nil)
if err != nil {
log.Debugf("sensor.app.prepareEnv.CopyDir(%v,%v) error: %v", inPath, dstPath, err)
logger.Debugf("fsutil.CopyDir(%v,%v) error: %v", inPath, dstPath, err)
}

if len(errs) > 0 {
log.Debugf("sensor.app.prepareEnv.CopyDir(%v,%v) copy errors: %+v", inPath, dstPath, errs)
logger.Debugf("fsutil.CopyDir(%v,%v) copy errors: %+v", inPath, dstPath, errs)
}
} else {
if err := fsutil.CopyFile(cmd.KeepPerms, inPath, dstPath, true); err != nil {
log.Debugf("sensor.app.prepareEnv.CopyFile(%v,%v) error: %v", inPath, dstPath, err)
logger.Debugf("fsutil.CopyFile(%v,%v) error: %v", inPath, dstPath, err)
}
}
}

@@ -418,7 +424,7 @@ func (a *artifactor) PrepareEnv(cmd *command.StartMonitor) error {
dstPath := fmt.Sprintf("%s%s", preservedDirPath, inPath)
if fsutil.Exists(dstPath) {
if err := fsutil.SetAccess(dstPath, perms); err != nil {
log.Debugf("sensor.app.prepareEnv.SetPerms(%v,%v) error: %v", dstPath, perms, err)
logger.Debugf("fsutil.SetAccess(%v,%v) error: %v", dstPath, perms, err)
}
}
}

@@ -427,7 +433,7 @@ func (a *artifactor) PrepareEnv(cmd *command.StartMonitor) error {
return nil
}

func (a *artifactor) ProcessReports(
func (a *processor) ProcessReports(
cmd *command.StartMonitor,
mountPoint string,
peReport *report.PeMonitorReport,

@@ -435,8 +441,11 @@ func (a *artifactor) ProcessReports(
ptReport *report.PtMonitorReport,
) error {
//TODO: when peReport is available filter file events from fanReport
logger := log.WithField("op", "processor.ProcessReports")
logger.Trace("call")
defer logger.Trace("exit")

log.Debug("sensor: monitor.worker - processing data...")
logger.Debug("processing data...")

fileCount := 0
for _, processFileMap := range fanReport.ProcessFiles {

@@ -449,12 +458,12 @@ func (a *artifactor) ProcessReports(
}
}

log.Debugf("sensor: processReports(): len(fanReport.ProcessFiles)=%v / fileCount=%v", len(fanReport.ProcessFiles), fileCount)
logger.Debugf("len(fanReport.ProcessFiles)=%v / fileCount=%v", len(fanReport.ProcessFiles), fileCount)
allFilesMap := findSymlinks(fileList, mountPoint, cmd.Excludes)
return saveResults(a.origPathMap, a.artifactsDirName, cmd, allFilesMap, fanReport, ptReport, peReport)
}

func (a *artifactor) Archive() error {
func (a *processor) Archive() error {
toArchive := map[string]struct{}{}
for _, f := range a.artifactsExtra {
if fsutil.Exists(f) {

@@ -499,7 +508,7 @@ func saveResults(
) error {
log.Debugf("saveResults(%v,...)", len(fileNames))

artifactStore := newArtifactStore(origPathMap, artifactsDirName, fileNames, fanMonReport, ptMonReport, peReport, cmd)
artifactStore := newStore(origPathMap, artifactsDirName, fileNames, fanMonReport, ptMonReport, peReport, cmd)
artifactStore.prepareArtifacts()
artifactStore.saveArtifacts()
artifactStore.enumerateArtifacts()

@@ -507,7 +516,12 @@ func saveResults(
return artifactStore.saveReport()
}

type artifactStore struct {
// NOTE:
// the 'store' is supposed to only store/save/copy the artifacts we identified,
// but overtime a lot of artifact processing and post-processing logic
// ended up there too (which belongs in the artifact 'processor').
// TODO: refactor 'processor' and 'store' to have the right logic in the right places
type store struct {
origPathMap map[string]struct{}
storeLocation string
fanMonReport *report.FanMonitorReport

@@ -523,15 +537,15 @@ type artifactStore struct {
appStacks map[string]*appStackInfo
}

func newArtifactStore(
func newStore(
origPathMap map[string]struct{},
storeLocation string,
rawNames map[string]*report.ArtifactProps,
fanMonReport *report.FanMonitorReport,
ptMonReport *report.PtMonitorReport,
peMonReport *report.PeMonitorReport,
cmd *command.StartMonitor) *artifactStore {
store := &artifactStore{
cmd *command.StartMonitor) *store {
store := &store{
origPathMap: origPathMap,
storeLocation: storeLocation,
fanMonReport: fanMonReport,

@@ -550,7 +564,7 @@ func newArtifactStore(
return store
}

func (p *artifactStore) getArtifactFlags(artifactFileName string) map[string]bool {
func (p *store) getArtifactFlags(artifactFileName string) map[string]bool {
flags := map[string]bool{}
for _, processFileMap := range p.fanMonReport.ProcessFiles {
if finfo, ok := processFileMap[artifactFileName]; ok {

@@ -575,7 +589,7 @@ func (p *artifactStore) getArtifactFlags(artifactFileName string) map[string]boo
return flags
}

func (p *artifactStore) prepareArtifact(artifactFileName string) {
func (p *store) prepareArtifact(artifactFileName string) {
srcLinkFileInfo, err := os.Lstat(artifactFileName)
if err != nil {
log.Debugf("prepareArtifact - artifact don't exist: %v (%v)", artifactFileName, os.IsNotExist(err))

@@ -664,7 +678,7 @@ func (p *artifactStore) prepareArtifact(artifactFileName string) {
}
}

func (p *artifactStore) prepareArtifacts() {
func (p *store) prepareArtifacts() {
log.Debugf("p.prepareArtifacts() p.rawNames=%v", len(p.rawNames))

for artifactFileName := range p.rawNames {

@@ -762,7 +776,7 @@ func (p *artifactStore) prepareArtifacts() {
p.resolveLinks()
}

func (p *artifactStore) resolveLinks() {
func (p *store) resolveLinks() {
//note:
//the links should be resolved in findSymlinks, but
//the current design needs to be improved to catch all symlinks

@@ -937,38 +951,38 @@ func linkTargetToFullPath(fullPath, target string) string {
return filepath.Clean(filepath.Join(d, target))
}

func (p *artifactStore) saveWorkdir(excludePatterns []string) {
func (p *store) saveWorkdir(excludePatterns []string) {
if p.cmd.IncludeWorkdir == "" {
return
}

if artifact.IsFilteredPath(p.cmd.IncludeWorkdir) {
log.Debug("sensor.artifactStore.saveWorkdir(): skipping filtered workdir")
log.Debug("sensor.store.saveWorkdir(): skipping filtered workdir")
return
}

if !fsutil.DirExists(p.cmd.IncludeWorkdir) {
log.Debugf("sensor.artifactStore.saveWorkdir: workdir does not exist %s", p.cmd.IncludeWorkdir)
log.Debugf("sensor.store.saveWorkdir: workdir does not exist %s", p.cmd.IncludeWorkdir)
return
}

dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, p.cmd.IncludeWorkdir)
if fsutil.Exists(dstPath) {
log.Debug("sensor.artifactStore.saveWorkdir: workdir dst path already exists")
log.Debug("sensor.store.saveWorkdir: workdir dst path already exists")
//it's possible that some of the files in the work dir are already copied
//the copy logic will improve when we copy the files separately
//for now just copy the whole workdir
}

log.Debugf("sensor.artifactStore.saveWorkdir: workdir=%s", p.cmd.IncludeWorkdir)
log.Debugf("sensor.store.saveWorkdir: workdir=%s", p.cmd.IncludeWorkdir)

err, errs := fsutil.CopyDir(p.cmd.KeepPerms, p.cmd.IncludeWorkdir, dstPath, true, true, excludePatterns, nil, nil)
if err != nil {
log.Debugf("sensor.artifactStore.saveWorkdir: CopyDir(%v,%v) error: %v", p.cmd.IncludeWorkdir, dstPath, err)
log.Debugf("sensor.store.saveWorkdir: CopyDir(%v,%v) error: %v", p.cmd.IncludeWorkdir, dstPath, err)
}

if len(errs) > 0 {
log.Debugf("sensor.artifactStore.saveWorkdir: CopyDir(%v,%v) copy errors: %+v", p.cmd.IncludeWorkdir, dstPath, errs)
log.Debugf("sensor.store.saveWorkdir: CopyDir(%v,%v) copy errors: %+v", p.cmd.IncludeWorkdir, dstPath, errs)
}

//todo:

@@ -996,7 +1010,7 @@ var osLibsNetFiles = []string{
osLibHostConf,
}

func (p *artifactStore) saveOSLibsNetwork() {
func (p *store) saveOSLibsNetwork() {
if !p.cmd.IncludeOSLibsNet {
return
}

@@ -1006,19 +1020,19 @@ func (p *artifactStore) saveOSLibsNetwork() {
continue
}

log.Debugf("sensor.artifactStore.saveOSLibsNetwork: copy %s", fp)
log.Debugf("sensor.store.saveOSLibsNetwork: copy %s", fp)
dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, fp)
if fsutil.Exists(dstPath) {
continue
}

if err := fsutil.CopyFile(p.cmd.KeepPerms, fp, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork: fsutil.CopyFile(%v,%v) error - %v", fp, dstPath, err)
log.Debugf("sensor.store.saveOSLibsNetwork: fsutil.CopyFile(%v,%v) error - %v", fp, dstPath, err)
}
}

if len(p.origPathMap) == 0 {
log.Debug("sensor.artifactStore.saveOSLibsNetwork: no origPathMap")
log.Debug("sensor.store.saveOSLibsNetwork: no origPathMap")
return
}

@@ -1031,7 +1045,7 @@ func (p *artifactStore) saveOSLibsNetwork() {
strings.Contains(fileName, osUsrLibDir) ||
strings.Contains(fileName, osUsrLib64Dir)) &&
strings.Contains(fileName, osLibSO) {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork: match - %s", fileName)
log.Debugf("sensor.store.saveOSLibsNetwork: match - %s", fileName)
pathMap[fileName] = struct{}{}
}
}

@@ -1044,7 +1058,7 @@ func (p *artifactStore) saveOSLibsNetwork() {

fpaths, err := resloveLink(fpath)
if err != nil {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork: error resolving link - %s", fpath)
log.Debugf("sensor.store.saveOSLibsNetwork: error resolving link - %s", fpath)
continue
}

@@ -1063,9 +1077,9 @@ func (p *artifactStore) saveOSLibsNetwork() {
binArtifacts, err := sodeps.AllDependencies(fp)
if err != nil {
if err == sodeps.ErrDepResolverNotFound {
log.Debug("sensor.artifactStore.saveOSLibsNetwork[bsa] - no static bin dep resolver")
log.Debug("sensor.store.saveOSLibsNetwork[bsa] - no static bin dep resolver")
} else {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork[bsa] - %v - error getting bin artifacts => %v\n", fp, err)
log.Debugf("sensor.store.saveOSLibsNetwork[bsa] - %v - error getting bin artifacts => %v\n", fp, err)
}
continue
}

@@ -1073,7 +1087,7 @@ func (p *artifactStore) saveOSLibsNetwork() {
for _, bpath := range binArtifacts {
bfpaths, err := resloveLink(bpath)
if err != nil {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork: error resolving link - %s", bpath)
log.Debugf("sensor.store.saveOSLibsNetwork: error resolving link - %s", bpath)
continue
}

@@ -1092,7 +1106,7 @@ func (p *artifactStore) saveOSLibsNetwork() {
}
}

log.Debugf("sensor.artifactStore.saveOSLibsNetwork: - allPathMap(%v) = %+v", len(allPathMap), allPathMap)
log.Debugf("sensor.store.saveOSLibsNetwork: - allPathMap(%v) = %+v", len(allPathMap), allPathMap)
for fp := range allPathMap {
if !fsutil.Exists(fp) {
continue

@@ -1104,7 +1118,7 @@ func (p *artifactStore) saveOSLibsNetwork() {
}

if err := fsutil.CopyFile(p.cmd.KeepPerms, fp, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveOSLibsNetwork: fsutil.CopyFile(%v,%v) error - %v", fp, dstPath, err)
log.Debugf("sensor.store.saveOSLibsNetwork: fsutil.CopyFile(%v,%v) error - %v", fp, dstPath, err)
}
}
}

@@ -1148,21 +1162,21 @@ func resloveLink(fpath string) ([]string, error) {
return out, nil
}

func (p *artifactStore) saveCertsData() {
func (p *store) saveCertsData() {
copyCertFiles := func(list []string) {
log.Debugf("sensor.artifactStore.saveCertsData.copyCertFiles(list=%+v)", list)
log.Debugf("sensor.store.saveCertsData.copyCertFiles(list=%+v)", list)
for _, fname := range list {
if fsutil.Exists(fname) {
dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, fname)
if err := fsutil.CopyFile(p.cmd.KeepPerms, fname, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyCertFiles: fsutil.CopyFile(%v,%v) error - %v", fname, dstPath, err)
log.Debugf("sensor.store.saveCertsData.copyCertFiles: fsutil.CopyFile(%v,%v) error - %v", fname, dstPath, err)
}
}
}
}

copyDirs := func(list []string, copyLinkTargets bool) {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs(list=%+v,copyLinkTargets=%v)", list, copyLinkTargets)
log.Debugf("sensor.store.saveCertsData.copyDirs(list=%+v,copyLinkTargets=%v)", list, copyLinkTargets)
for _, fname := range list {
if fsutil.Exists(fname) {
dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, fname)

@@ -1170,52 +1184,52 @@ func (p *artifactStore) saveCertsData() {
if fsutil.IsDir(fname) {
err, errs := fsutil.CopyDir(p.cmd.KeepPerms, fname, dstPath, true, true, nil, nil, nil)
if err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: fsutil.CopyDir(%v,%v) error: %v", fname, dstPath, err)
log.Debugf("sensor.store.saveCertsData.copyDirs: fsutil.CopyDir(%v,%v) error: %v", fname, dstPath, err)
} else if copyLinkTargets {
foList, err := ioutil.ReadDir(fname)
if err == nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs(): dir=%v fcount=%v", fname, len(foList))
log.Debugf("sensor.store.saveCertsData.copyDirs(): dir=%v fcount=%v", fname, len(foList))
for _, fo := range foList {
fullPath := filepath.Join(fname, fo.Name())
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs(): dir=%v fullPath=%v", fname, fullPath)
log.Debugf("sensor.store.saveCertsData.copyDirs(): dir=%v fullPath=%v", fname, fullPath)
if fsutil.IsSymlink(fullPath) {
linkRef, err := os.Readlink(fullPath)
if err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: os.Readlink(%v) error - %v", fullPath, err)
log.Debugf("sensor.store.saveCertsData.copyDirs: os.Readlink(%v) error - %v", fullPath, err)
continue
}

log.Debugf("sensor.artifactStore.saveCertsData.copyDirs(): dir=%v fullPath=%v linkRef=%v",
log.Debugf("sensor.store.saveCertsData.copyDirs(): dir=%v fullPath=%v linkRef=%v",
fname, fullPath, linkRef)
if strings.Contains(linkRef, "/") {
targetFilePath := linkTargetToFullPath(fullPath, linkRef)
if targetFilePath != "" && fsutil.Exists(targetFilePath) {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs(): dir=%v fullPath=%v linkRef=%v targetFilePath=%v",
log.Debugf("sensor.store.saveCertsData.copyDirs(): dir=%v fullPath=%v linkRef=%v targetFilePath=%v",
fname, fullPath, linkRef, targetFilePath)
dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, targetFilePath)
if err := fsutil.CopyFile(p.cmd.KeepPerms, targetFilePath, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: fsutil.CopyFile(%v,%v) error - %v", targetFilePath, dstPath, err)
log.Debugf("sensor.store.saveCertsData.copyDirs: fsutil.CopyFile(%v,%v) error - %v", targetFilePath, dstPath, err)
}
} else {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: targetFilePath does not exist - %v", targetFilePath)
log.Debugf("sensor.store.saveCertsData.copyDirs: targetFilePath does not exist - %v", targetFilePath)
}
}
}
}
} else {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: os.ReadDir(%v) error - %v", fname, err)
log.Debugf("sensor.store.saveCertsData.copyDirs: os.ReadDir(%v) error - %v", fname, err)
}
}

if len(errs) > 0 {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: fsutil.CopyDir(%v,%v) copy errors: %+v", fname, dstPath, errs)
log.Debugf("sensor.store.saveCertsData.copyDirs: fsutil.CopyDir(%v,%v) copy errors: %+v", fname, dstPath, errs)
}
} else if fsutil.IsSymlink(fname) {
if err := fsutil.CopySymlinkFile(p.cmd.KeepPerms, fname, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyDirs: fsutil.CopySymlinkFile(%v,%v) error - %v", fname, dstPath, err)
log.Debugf("sensor.store.saveCertsData.copyDirs: fsutil.CopySymlinkFile(%v,%v) error - %v", fname, dstPath, err)
}
} else {
log.Debugf("artifactStore.saveCertsData.copyDir: unexpected obect type - %s", fname)
log.Debugf("store.saveCertsData.copyDir: unexpected obect type - %s", fname)
}
}
}

@@ -1223,13 +1237,13 @@ func (p *artifactStore) saveCertsData() {

copyAppCertFiles := func(suffix string, dirs []string, subdirPrefix string) {
//NOTE: dirs end with "/" (need to revisit the formatting to make it consistent)
log.Debugf("sensor.artifactStore.saveCertsData.copyAppCertFiles(suffix=%v,dirs=%+v,subdirPrefix=%v)",
log.Debugf("sensor.store.saveCertsData.copyAppCertFiles(suffix=%v,dirs=%+v,subdirPrefix=%v)",
suffix, dirs, subdirPrefix)
for _, dirName := range dirs {
if subdirPrefix != "" {
foList, err := ioutil.ReadDir(dirName)
if err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyAppCertFiles: os.ReadDir(%v) error - %v", dirName, err)
log.Debugf("sensor.store.saveCertsData.copyAppCertFiles: os.ReadDir(%v) error - %v", dirName, err)
continue
}

@@ -1245,7 +1259,7 @@ func (p *artifactStore) saveCertsData() {
if fsutil.Exists(srcFilePath) {
dstPath := fmt.Sprintf("%s/files%s", p.storeLocation, srcFilePath)
if err := fsutil.CopyFile(p.cmd.KeepPerms, srcFilePath, dstPath, true); err != nil {
log.Debugf("sensor.artifactStore.saveCertsData.copyAppCertFiles: fsutil.CopyFile(%v,%v) error - %v", srcFilePath, dstPath, err)
log.Debugf("sensor.store.saveCertsData.copyAppCertFiles: fsutil.CopyFile(%v,%v) error - %v", srcFilePath, dstPath, err)
}
}
}

@@ -1329,7 +1343,7 @@ func (p *artifactStore) saveCertsData() {
}
}

func (p *artifactStore) saveArtifacts() {
func (p *store) saveArtifacts() {
var includePaths map[string]bool
var newPerms map[string]*fsutil.AccessInfo

@@ -2019,7 +2033,7 @@ copyBinIncludes:
}
}

func (p *artifactStore) detectAppStack(fileName string) {
func (p *store) detectAppStack(fileName string) {
isPython := detectPythonCodeFile(fileName)
if isPython {
appStack, ok := p.appStacks[certdiscover.LanguagePython]

@@ -2193,10 +2207,14 @@ func detectNodePkgDir(fileName string) string {
return ""
}

func (p *artifactStore) archiveArtifacts() error {
func (p *store) archiveArtifacts() error {
logger := log.WithField("op", "store.archiveArtifacts")
logger.Trace("call")
defer logger.Trace("exit")

src := filepath.Join(p.storeLocation, app.ArtifactFilesDirName)
dst := filepath.Join(p.storeLocation, filesArchiveName)
log.Debugf("artifactStore.archiveArtifacts: src='%s' dst='%s'", src, dst)
logger.Debugf("src='%s' dst='%s'", src, dst)

trimPrefix := fmt.Sprintf("%s/", src)
return fsutil.ArchiveDir(dst, src, trimPrefix, "")

@@ -2205,7 +2223,11 @@ func (p *artifactStore) archiveArtifacts() error {
// Go over all saved artifacts and update the name list to make
// sure all the files & folders are reflected in the final report.
// Hopefully, just a temporary workaround until a proper refactoring.
func (p *artifactStore) enumerateArtifacts() {
func (p *store) enumerateArtifacts() {
logger := log.WithField("op", "store.enumerateArtifacts")
logger.Trace("call")
defer logger.Trace("exit")

knownFiles := list2map(p.nameList)
artifactFilesDir := filepath.Join(p.storeLocation, app.ArtifactFilesDirName)

@@ -2216,7 +2238,7 @@ func (p *artifactStore) enumerateArtifacts() {

entries, err := os.ReadDir(curpath)
if err != nil {
log.WithError(err).Debug("artifactStore.enumerateArtifacts: readdir error")
logger.WithError(err).Debugf("os.ReadDir(%s)", curpath)
// Keep processing though since it might have been a partial result.
}

@@ -2234,10 +2256,9 @@ func (p *artifactStore) enumerateArtifacts() {
p.rawNames[curpath] = props
knownFiles[curpath] = true
} else {
log.
WithError(err).
logger.WithError(err).
WithField("path", curpath).
Debug("artifactStore.enumerateArtifacts: failed computing dir artifact props")
Debugf("artifactProps(%s): failed computing dir artifact props", curpath)
}
continue
}

@@ -2262,16 +2283,19 @@ func (p *artifactStore) enumerateArtifacts() {
p.rawNames[childpath] = props
|
||||
knownFiles[childpath] = true
|
||||
} else {
|
||||
log.
|
||||
WithError(err).
|
||||
logger.WithError(err).
|
||||
WithField("path", childpath).
|
||||
Debug("artifactStore.enumerateArtifacts: failed computing artifact props")
|
||||
Debugf("artifactProps(%s): failed computing artifact props", childpath)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (p *artifactStore) saveReport() error {
|
||||
func (p *store) saveReport() error {
|
||||
logger := log.WithField("op", "store.saveReport")
|
||||
logger.Trace("call")
|
||||
defer logger.Trace("exit")
|
||||
|
||||
creport := report.ContainerReport{
|
||||
SensorVersion: version.Current(),
|
||||
Monitors: report.MonitorReports{
|
||||
@ -2293,7 +2317,12 @@ func (p *artifactStore) saveReport() error {
|
||||
|
||||
sort.Strings(p.nameList)
|
||||
for _, fname := range p.nameList {
|
||||
creport.Image.Files = append(creport.Image.Files, p.rawNames[fname])
|
||||
rawNameRecord, found := p.rawNames[fname]
|
||||
if found {
|
||||
creport.Image.Files = append(creport.Image.Files, rawNameRecord)
|
||||
} else {
|
||||
logger.Debugf("nameList file name (%s) not found in rawNames map", fname)
|
||||
}
|
||||
}
|
||||
|
||||
_, err := os.Stat(p.storeLocation)
|
||||
@ -2305,7 +2334,7 @@ func (p *artifactStore) saveReport() error {
|
||||
}
|
||||
|
||||
reportFilePath := filepath.Join(p.storeLocation, report.DefaultContainerReportFileName)
|
||||
log.Debugf("sensor: monitor - saving report to '%s'", reportFilePath)
|
||||
logger.Debugf("saving report to '%s'", reportFilePath)
|
||||
|
||||
var reportData bytes.Buffer
|
||||
encoder := json.NewEncoder(&reportData)
|
@ -9,9 +9,9 @@ import (
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/docker-slim/docker-slim/pkg/app/sensor/artifacts"
|
||||
"github.com/docker-slim/docker-slim/pkg/app/sensor/artifact"
|
||||
"github.com/docker-slim/docker-slim/pkg/app/sensor/execution"
|
||||
"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors"
|
||||
"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor"
|
||||
"github.com/docker-slim/docker-slim/pkg/ipc/command"
|
||||
"github.com/docker-slim/docker-slim/pkg/ipc/event"
|
||||
)
|
||||
@ -22,8 +22,8 @@ type Sensor struct {
|
||||
ctx context.Context
|
||||
exe execution.Interface
|
||||
|
||||
newMonitor monitors.NewCompositeMonitorFunc
|
||||
artifactor artifacts.Artifactor
|
||||
newMonitor monitor.NewCompositeMonitorFunc
|
||||
artifactor artifact.Processor
|
||||
|
||||
workDir string
|
||||
mountPoint string
|
||||
@ -32,8 +32,8 @@ type Sensor struct {
|
||||
func NewSensor(
|
||||
ctx context.Context,
|
||||
exe execution.Interface,
|
||||
newMonitor monitors.NewCompositeMonitorFunc,
|
||||
artifactor artifacts.Artifactor,
|
||||
newMonitor monitor.NewCompositeMonitorFunc,
|
||||
artifactor artifact.Processor,
|
||||
workDir string,
|
||||
mountPoint string,
|
||||
) *Sensor {
|
||||
@ -86,7 +86,7 @@ func (s *Sensor) Run() error {
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Sensor) runWithoutMonitor() (monitors.CompositeMonitor, error) {
|
||||
func (s *Sensor) runWithoutMonitor() (monitor.CompositeMonitor, error) {
|
||||
for {
|
||||
select {
|
||||
case cmd := <-s.exe.Commands():
|
||||
@ -109,7 +109,7 @@ func (s *Sensor) runWithoutMonitor() (monitors.CompositeMonitor, error) {
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Sensor) startMonitor(cmd *command.StartMonitor) (monitors.CompositeMonitor, error) {
|
||||
func (s *Sensor) startMonitor(cmd *command.StartMonitor) (monitor.CompositeMonitor, error) {
|
||||
if err := s.artifactor.PrepareEnv(cmd); err != nil {
|
||||
log.WithError(err).Error("sensor: artifactor.PrepareEnv() failed")
|
||||
return nil, fmt.Errorf("failed to prepare artifacts env: %w", err)
|
||||
@ -149,7 +149,7 @@ func (s *Sensor) startMonitor(cmd *command.StartMonitor) (monitors.CompositeMoni
|
||||
return mon, nil
|
||||
}
|
||||
|
||||
func (s *Sensor) runWithMonitor(mon monitors.CompositeMonitor) error {
|
||||
func (s *Sensor) runWithMonitor(mon monitor.CompositeMonitor) error {
|
||||
log.Debug("sensor: monitor.worker - waiting to stop monitoring...")
|
||||
log.Debug("sensor: error collector - waiting for errors...")
|
||||
|
||||
@ -179,7 +179,7 @@ loop:
|
||||
|
||||
case err := <-mon.Errors():
|
||||
log.WithError(err).Warn("sensor: non-critical monitor error condition")
|
||||
s.exe.PubEvent(event.Error, monitors.NonCriticalError(err).Error())
|
||||
s.exe.PubEvent(event.Error, monitor.NonCriticalError(err).Error())
|
||||
|
||||
case <-time.After(time.Second * 5):
|
||||
log.Debug(".")
|
||||
@ -197,13 +197,13 @@ loop:
|
||||
return s.processMonitoringResults(mon)
|
||||
}
|
||||
|
||||
func (s *Sensor) processMonitoringResults(mon monitors.CompositeMonitor) error {
|
||||
func (s *Sensor) processMonitoringResults(mon monitor.CompositeMonitor) error {
|
||||
// A bit of code duplication to avoid starting a goroutine
|
||||
// for error event handling - keeping the control flow
|
||||
// "single-threaded" keeps reasoning about the logic.
|
||||
for _, err := range mon.DrainErrors() {
|
||||
log.WithError(err).Warn("sensor: non-critical monitor error condition (drained)")
|
||||
s.exe.PubEvent(event.Error, monitors.NonCriticalError(err).Error())
|
||||
s.exe.PubEvent(event.Error, monitor.NonCriticalError(err).Error())
|
||||
}
|
||||
|
||||
log.Info("sensor: composite monitor is done, checking status...")
|
||||
@ -221,7 +221,7 @@ func (s *Sensor) processMonitoringResults(mon monitors.CompositeMonitor) error {
|
||||
report.FanReport,
|
||||
report.PtReport,
|
||||
); err != nil {
|
||||
log.WithError(err).Error("sensor: artifacts.ProcessReports() failed")
|
||||
log.WithError(err).Error("sensor: artifact.ProcessReports() failed")
|
||||
return fmt.Errorf("saving reports failed: %w", err)
|
||||
}
|
||||
return nil // Clean exit
|
||||
|
@@ -7,11 +7,11 @@ import (
 "testing"
 "time"

-"github.com/docker-slim/docker-slim/pkg/app/sensor/artifacts"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/artifact"
 "github.com/docker-slim/docker-slim/pkg/app/sensor/controlled"
-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors"
-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors/fanotify"
-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors/ptrace"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor/fanotify"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor/ptrace"
 "github.com/docker-slim/docker-slim/pkg/ipc/command"
 "github.com/docker-slim/docker-slim/pkg/report"
 "github.com/docker-slim/docker-slim/pkg/test/stub/sensor/execution"
@@ -23,7 +23,7 @@ func newStubMonitorFunc(
 ctx context.Context,
 fanMon fanotify.Monitor,
 ptMon ptrace.Monitor,
-) monitors.NewCompositeMonitorFunc {
+) monitor.NewCompositeMonitorFunc {
 if fanMon == nil {
 fanMon = stubmonitor.NewFanMonitor(ctx)
 }
@@ -39,8 +39,8 @@ func newStubMonitorFunc(
 mountPoint string,
 origPaths map[string]struct{},
 signalCh <-chan os.Signal,
-) (monitors.CompositeMonitor, error) {
-return monitors.Compose(
+) (monitor.CompositeMonitor, error) {
+return monitor.Compose(
 cmd,
 fanMon,
 ptMon,
@@ -51,7 +51,7 @@ func newStubMonitorFunc(

 type artifactorStub struct{}

-var _ artifacts.Artifactor = &artifactorStub{}
+var _ artifact.Processor = &artifactorStub{}

 func (a *artifactorStub) ArtifactsDir() string {
 return ""
@@ -8,7 +8,7 @@ import (
 "path/filepath"
 "strings"

-"github.com/docker-slim/docker-slim/pkg/app/sensor/detectors/binfile"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/detector/binfile"

 log "github.com/sirupsen/logrus"
 )
@@ -1,4 +1,4 @@
-package monitors
+package monitor

 import (
 "context"
@@ -13,8 +13,8 @@ import (

 log "github.com/sirupsen/logrus"

-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors/fanotify"
-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors/ptrace"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor/fanotify"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor/ptrace"
 "github.com/docker-slim/docker-slim/pkg/ipc/command"
 "github.com/docker-slim/docker-slim/pkg/report"
 "github.com/docker-slim/docker-slim/pkg/util/errutil"
@@ -1,4 +1,4 @@
-package monitors
+package monitor

 import (
 "context"
@@ -10,9 +10,9 @@ import (

 log "github.com/sirupsen/logrus"

-"github.com/docker-slim/docker-slim/pkg/app/sensor/artifacts"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/artifact"
 "github.com/docker-slim/docker-slim/pkg/app/sensor/execution"
-"github.com/docker-slim/docker-slim/pkg/app/sensor/monitors"
+"github.com/docker-slim/docker-slim/pkg/app/sensor/monitor"
 "github.com/docker-slim/docker-slim/pkg/ipc/command"
 "github.com/docker-slim/docker-slim/pkg/ipc/event"
 )
@@ -25,8 +25,8 @@ type Sensor struct {
 ctx context.Context
 exe execution.Interface

-newMonitor monitors.NewCompositeMonitorFunc
-artifactor artifacts.Artifactor
+newMonitor monitor.NewCompositeMonitorFunc
+artifactor artifact.Processor

 workDir string
 mountPoint string
@@ -38,8 +38,8 @@ type Sensor struct {
 func NewSensor(
 ctx context.Context,
 exe execution.Interface,
-newMonitor monitors.NewCompositeMonitorFunc,
-artifactor artifacts.Artifactor,
+newMonitor monitor.NewCompositeMonitorFunc,
+artifactor artifact.Processor,
 workDir string,
 mountPoint string,
 stopSignal os.Signal,
@@ -125,7 +125,7 @@ func (s *Sensor) Run() error {
 report.FanReport,
 report.PtReport,
 ); err != nil {
-log.WithError(err).Error("sensor: artifacts.ProcessReports() failed")
+log.WithError(err).Error("sensor: artifact.ProcessReports() failed")
 return fmt.Errorf("saving reports failed: %w", err)
 }

@@ -133,7 +133,7 @@ func (s *Sensor) Run() error {
 return nil
 }

-func (s *Sensor) runMonitor(mon monitors.CompositeMonitor) {
+func (s *Sensor) runMonitor(mon monitor.CompositeMonitor) {
 loop:
 for {
 select {
@@ -142,7 +142,7 @@ loop:

 case err := <-mon.Errors():
 log.WithError(err).Warn("sensor: non-critical monitor error condition")
-s.exe.PubEvent(event.Error, monitors.NonCriticalError(err).Error())
+s.exe.PubEvent(event.Error, monitor.NonCriticalError(err).Error())

 case <-time.After(time.Second * 5):
 log.Debug(".")
@@ -158,7 +158,7 @@ loop:
 // "single-threaded" keeps reasoning about the logic.
 for _, err := range mon.DrainErrors() {
 log.WithError(err).Warn("sensor: non-critical monitor error condition (drained)")
-s.exe.PubEvent(event.Error, monitors.NonCriticalError(err).Error())
+s.exe.PubEvent(event.Error, monitor.NonCriticalError(err).Error())
 }
 }

@@ -8,6 +8,7 @@ const (
 Lint Type = "lint"
 Containerize Type = "containerize"
 Convert Type = "convert"
+Merge Type = "merge"
 Edit Type = "edit"
 Debug Type = "debug"
 Probe Type = "probe"
@@ -27,7 +27,7 @@ var (
 const (
 volumeMountPat = "%s:/data"
 volumeBasePath = "/data"
-emptyImageName = "docker-slim-empty-image"
+emptyImageName = "docker-slim-empty-image:latest"
 emptyImageDockerfile = "FROM scratch\nCMD\n"
 )

@@ -190,6 +190,7 @@ func ListImages(dclient *dockerapi.Client, imageNameFilter string) (map[string]B
 }

 func BuildEmptyImage(dclient *dockerapi.Client) error {
+//TODO: use the 'internal' build engine that doesn't need Docker
 var err error
 if dclient == nil {
 unixSocketAddr := dockerclient.GetUnixSocketAddr()
@@ -232,7 +233,7 @@ func BuildEmptyImage(dclient *dockerapi.Client) error {
 ForceRmTmpContainer: true,
 }
 if err := dclient.BuildImage(buildOptions); err != nil {
-log.Errorf("dockerutil.BuildEmptyImage: dockerapi.BuildImage() error = %v", err)
+log.Errorf("dockerutil.BuildEmptyImage: dockerapi.BuildImage() error = %v / output: %s", err, output.String())
 return err
 }
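The dockerutil hunks above pin the synthesized empty base image to an explicit tag (`docker-slim-empty-image:latest`) and include the build output in the error log when the build fails. Below is a minimal sketch of driving `BuildEmptyImage` from the host side; the `dockerutil` import path and the client setup are assumptions for illustration, not part of this commit.

```go
package main

import (
	"log"

	dockerapi "github.com/fsouza/go-dockerclient"

	"github.com/docker-slim/docker-slim/pkg/docker/dockerutil" // import path assumed
)

func main() {
	// Build a client from the usual DOCKER_HOST environment variables; per the
	// hunk above, passing nil would also work because BuildEmptyImage falls back
	// to the local unix socket on its own.
	dclient, err := dockerapi.NewClientFromEnv()
	if err != nil {
		log.Fatalf("docker client: %v", err)
	}

	// Creates the "docker-slim-empty-image:latest" FROM-scratch image used as a
	// minimal base; on failure the Docker build output now shows up in the log.
	if err := dockerutil.BuildEmptyImage(dclient); err != nil {
		log.Fatalf("BuildEmptyImage: %v", err)
	}
}
```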
pkg/imagereader/imagereader.go (new file, 170 lines added)
@@ -0,0 +1,170 @@
+package imagereader
+
+import (
+    "os"
+
+    "github.com/google/go-containerregistry/pkg/crane"
+    "github.com/google/go-containerregistry/pkg/name"
+    "github.com/google/go-containerregistry/pkg/v1"
+    "github.com/google/go-containerregistry/pkg/v1/daemon"
+    log "github.com/sirupsen/logrus"
+
+    "github.com/docker-slim/docker-slim/pkg/imagebuilder"
+)
+
+type Instance struct {
+    imageName       string
+    nameRef         name.Reference
+    imageRef        v1.Image
+    exportedTarPath string
+    imageConfig     *imagebuilder.ImageConfig
+}
+
+func New(imageName string) (*Instance, error) {
+    logger := log.WithFields(log.Fields{
+        "op":         "imagereader.New",
+        "image.name": imageName,
+    })
+
+    logger.Trace("call")
+    defer logger.Trace("exit")
+
+    ref, err := name.ParseReference(imageName) //, name.WeakValidation)
+    if err != nil {
+        logger.WithError(err).Error("name.ParseReference")
+        return nil, err
+    }
+
+    //TODO/FUTURE: add other image source options (not just local Docker daemon)
+    //TODO/ASAP: need to pass the 'daemon' client otherwise it'll fail if the default client isn't enough
+    img, err := daemon.Image(ref)
+    if err != nil {
+        logger.WithError(err).Error("daemon.Image")
+        return nil, err
+    }
+
+    instance := &Instance{
+        imageName: imageName,
+        nameRef:   ref,
+        imageRef:  img,
+    }
+
+    return instance, nil
+}
+
+func (ref *Instance) ImageConfig() (*imagebuilder.ImageConfig, error) {
+    logger := log.WithFields(log.Fields{
+        "op":         "imagereader.Instance.ImageConfig",
+        "image.name": ref.imageName,
+    })
+
+    logger.Trace("call")
+    defer logger.Trace("exit")
+
+    if ref.imageConfig != nil {
+        return ref.imageConfig, nil
+    }
+
+    cf, err := ref.imageRef.ConfigFile()
+    if err != nil {
+        logger.WithError(err).Error("v1.Image.ConfigFile")
+        return nil, err
+    }
+
+    ref.imageConfig = &imagebuilder.ImageConfig{
+        Created:      cf.Created.Time,
+        Author:       cf.Author,
+        Architecture: cf.Architecture,
+        OS:           cf.OS,
+        OSVersion:    cf.OSVersion,
+        OSFeatures:   cf.OSFeatures,
+        Variant:      cf.Variant,
+        //RootFS *RootFS `json:"rootfs"` //not used building images
+        //History []History `json:"history,omitempty"` //not used building images
+        Container:     cf.Container,
+        DockerVersion: cf.DockerVersion,
+        Config: imagebuilder.RunConfig{
+            User:            cf.Config.User,
+            ExposedPorts:    cf.Config.ExposedPorts,
+            Env:             cf.Config.Env,
+            Entrypoint:      cf.Config.Entrypoint,
+            Cmd:             cf.Config.Cmd,
+            Volumes:         cf.Config.Volumes,
+            WorkingDir:      cf.Config.WorkingDir,
+            Labels:          cf.Config.Labels,
+            StopSignal:      cf.Config.StopSignal,
+            ArgsEscaped:     cf.Config.ArgsEscaped,
+            AttachStderr:    cf.Config.AttachStderr,
+            AttachStdin:     cf.Config.AttachStdin,
+            AttachStdout:    cf.Config.AttachStdout,
+            Domainname:      cf.Config.Domainname,
+            Hostname:        cf.Config.Hostname,
+            Image:           cf.Config.Image,
+            OnBuild:         cf.Config.OnBuild,
+            OpenStdin:       cf.Config.OpenStdin,
+            StdinOnce:       cf.Config.StdinOnce,
+            Tty:             cf.Config.Tty,
+            NetworkDisabled: cf.Config.NetworkDisabled,
+            MacAddress:      cf.Config.MacAddress,
+            Shell:           cf.Config.Shell, //??
+            //Healthcheck *HealthConfig `json:"Healthcheck,omitempty"`
+        },
+    }
+
+    return ref.imageConfig, nil
+}
+
+func (ref *Instance) FreeExportedFilesystem() error {
+    logger := log.WithFields(log.Fields{
+        "op":         "imagereader.Instance.FreeExportedFilesystem",
+        "image.name": ref.imageName,
+    })
+
+    logger.Trace("call")
+    defer logger.Trace("exit")
+
+    if ref.exportedTarPath == "" {
+        return nil
+    }
+
+    err := os.Remove(ref.exportedTarPath)
+    ref.exportedTarPath = ""
+    return err
+}
+
+func (ref *Instance) ExportFilesystem() (string, error) {
+    logger := log.WithFields(log.Fields{
+        "op":         "imagereader.Instance.ExportFilesystem",
+        "image.name": ref.imageName,
+    })
+
+    logger.Trace("call")
+    defer logger.Trace("exit")
+
+    if ref.exportedTarPath != "" {
+        return ref.exportedTarPath, nil
+    }
+
+    tarFile, err := os.CreateTemp("", "image-exported-fs-*.tar")
+    if err != nil {
+        return "", err
+    }
+
+    defer tarFile.Close()
+
+    err = crane.Export(ref.imageRef, tarFile)
+    if err != nil {
+        return "", err
+    }
+
+    if _, err := os.Stat(tarFile.Name()); err != nil {
+        return "", err
+    }
+
+    ref.exportedTarPath = tarFile.Name()
+    return ref.exportedTarPath, nil
+}
+
+func (ref *Instance) ExportedTarPath() string {
+    return ref.exportedTarPath
+}
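The new pkg/imagereader package above wraps go-containerregistry to read an image through the local Docker daemon, expose its config, and flatten its filesystem into a tar, which is what the basic image merge builds on. Below is a hedged usage sketch; only the exported functions shown in the file above are taken from the commit, while the target image name and error handling are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker-slim/docker-slim/pkg/imagereader"
)

func main() {
	// Open the image through the local Docker daemon (the only source the
	// new package supports so far).
	reader, err := imagereader.New("nginx:latest")
	if err != nil {
		log.Fatalf("imagereader.New: %v", err)
	}

	// Image metadata that a merge/build step could reuse later.
	cfg, err := reader.ImageConfig()
	if err != nil {
		log.Fatalf("ImageConfig: %v", err)
	}
	fmt.Println("entrypoint:", cfg.Config.Entrypoint, "cmd:", cfg.Config.Cmd)

	// Flatten the image filesystem into a temporary tar and release it when done.
	tarPath, err := reader.ExportFilesystem()
	if err != nil {
		log.Fatalf("ExportFilesystem: %v", err)
	}
	defer reader.FreeExportedFilesystem()

	fmt.Println("exported filesystem:", tarPath)
}
```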
@@ -198,6 +198,17 @@ type ConvertCommand struct {
 Command
 }

+// Output Version for 'merge'
+const OVMergeCommand = "1.0"
+
+// MergeCommand is the 'merge' command report data
+type MergeCommand struct {
+Command
+FirstImage string `json:"first_image"`
+LastImage string `json:"last_image"`
+UseLastImageMetadata bool `json:"use_last_image_metadata"`
+}
+
 // Output Version for 'edit'
 const OVEditCommand = "1.0"

@@ -350,6 +361,21 @@ func NewConvertCommand(reportLocation string, containerized bool) *ConvertComman
 return cmd
 }

+// NewMergeCommand creates a new 'edit' command report
+func NewMergeCommand(reportLocation string, containerized bool) *MergeCommand {
+cmd := &MergeCommand{
+Command: Command{
+reportLocation: reportLocation,
+Version: OVMergeCommand, //edit command 'results' version (report and artifacts)
+Type: command.Merge,
+State: command.StateUnknown,
+},
+}
+
+cmd.Command.init(containerized)
+return cmd
+}
+
 // NewEditCommand creates a new 'edit' command report
 func NewEditCommand(reportLocation string, containerized bool) *EditCommand {
 cmd := &EditCommand{
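The report additions above define the 'merge' command report type. A small illustrative sketch of creating and filling it in follows; the report location, the image names, and the `command.StateCompleted` constant are assumptions based on how the sibling command reports are used, not code from this commit.

```go
package main

import (
	"fmt"

	"github.com/docker-slim/docker-slim/pkg/command"
	"github.com/docker-slim/docker-slim/pkg/report"
)

func main() {
	// Report location and image references are hypothetical.
	cmdReport := report.NewMergeCommand("slim.report.json", false)
	cmdReport.FirstImage = "my-app:base"
	cmdReport.LastImage = "my-app:patched"
	cmdReport.UseLastImageMetadata = true
	// Assumes the shared command state constants; the merge command
	// implementation would set this once the merge finishes.
	cmdReport.State = command.StateCompleted

	fmt.Printf("%+v\n", cmdReport)
}
```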
vendor/github.com/cespare/xxhash/v2/LICENSE.txt (generated, vendored, new file, 22 lines added)
vendor/github.com/cespare/xxhash/v2/README.md (generated, vendored, new file, 72 lines added)
vendor/github.com/cespare/xxhash/v2/testall.sh (generated, vendored, new file, 10 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash.go (generated, vendored, new file, 228 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s (generated, vendored, new file, 209 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s (generated, vendored, new file, 183 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_asm.go (generated, vendored, new file, 15 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_other.go (generated, vendored, new file, 76 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_safe.go (generated, vendored, new file, 16 lines added)
vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go (generated, vendored, new file, 58 lines added)
vendor/modules.txt (vendored, 3 lines added)
@@ -46,6 +46,9 @@ github.com/c-bata/go-prompt/completer
 # github.com/c4milo/unpackit v0.0.0-20170704181138-4ed373e9ef1c
 ## explicit
 github.com/c4milo/unpackit
+# github.com/cespare/xxhash/v2 v2.2.0
+## explicit; go 1.11
+github.com/cespare/xxhash/v2
 # github.com/compose-spec/compose-go v0.0.0-20210916141509-a7e1bc322970 => ./pkg/third_party/compose-go
 ## explicit; go 1.16
 github.com/compose-spec/compose-go/errdefs
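The newly vendored github.com/cespare/xxhash/v2 module recorded above provides 64-bit xxHash (XXH64) hashing. A minimal, commit-independent sketch of its public API:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a byte slice or a string.
	fmt.Println(xxhash.Sum64([]byte("hello")))
	fmt.Println(xxhash.Sum64String("hello"))

	// Streaming use via the Digest type (implements hash.Hash64).
	d := xxhash.New()
	d.WriteString("hello ")
	d.WriteString("world")
	fmt.Println(d.Sum64())
}
```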