Oct 2 19:07:33.147220 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:07:33.147249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:07:33.147260 kernel: BIOS-provided physical RAM map: Oct 2 19:07:33.147268 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:07:33.147276 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:07:33.147284 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:07:33.147293 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:07:33.147301 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:07:33.147311 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:07:33.147319 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:07:33.147327 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:07:33.147335 kernel: NX (Execute Disable) protection: active Oct 2 19:07:33.147343 kernel: SMBIOS 2.8 present. Oct 2 19:07:33.147351 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:07:33.147362 kernel: Hypervisor detected: KVM Oct 2 19:07:33.147371 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:07:33.147379 kernel: kvm-clock: cpu 0, msr 68f8a001, primary cpu clock Oct 2 19:07:33.147386 kernel: kvm-clock: using sched offset of 2984975053 cycles Oct 2 19:07:33.147396 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:07:33.147405 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:07:33.147414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:07:33.147423 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:07:33.147432 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:07:33.147443 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:07:33.147451 kernel: Using GB pages for direct mapping Oct 2 19:07:33.147460 kernel: ACPI: Early table checksum verification disabled Oct 2 19:07:33.147469 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:07:33.147478 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147487 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147496 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147504 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:07:33.147513 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147524 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147533 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:07:33.147541 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:07:33.147550 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:07:33.147558 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:07:33.147567 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:07:33.147575 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:07:33.147584 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:07:33.147598 kernel: No NUMA configuration found Oct 2 19:07:33.147607 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:07:33.147616 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:07:33.147626 kernel: Zone ranges: Oct 2 19:07:33.147635 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:07:33.147645 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:07:33.147656 kernel: Normal empty Oct 2 19:07:33.147665 kernel: Movable zone start for each node Oct 2 19:07:33.147674 kernel: Early memory node ranges Oct 2 19:07:33.147684 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:07:33.147896 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:07:33.147908 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:07:33.147917 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:07:33.147926 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:07:33.148069 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:07:33.148089 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:07:33.148098 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:07:33.148108 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:07:33.148117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:07:33.148127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:07:33.148136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:07:33.148145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:07:33.148155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:07:33.148164 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:07:33.148175 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:07:33.148184 kernel: TSC deadline timer available Oct 2 19:07:33.148192 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:07:33.148201 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:07:33.148210 kernel: kvm-guest: setup PV sched yield Oct 2 19:07:33.148219 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:07:33.148228 kernel: Booting paravirtualized kernel on KVM Oct 2 19:07:33.148237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:07:33.148247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:07:33.148258 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:07:33.148268 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:07:33.148277 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:07:33.148286 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:07:33.148295 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:07:33.148304 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:07:33.148314 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:07:33.148323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:07:33.148333 kernel: Policy zone: DMA32 Oct 2 19:07:33.148344 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:07:33.148356 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:07:33.148365 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:07:33.148375 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:07:33.148384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:07:33.148393 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:07:33.148402 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:07:33.148411 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:07:33.148422 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:07:33.148431 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:07:33.148442 kernel: rcu: RCU event tracing is enabled. Oct 2 19:07:33.148452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:07:33.148462 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:07:33.148471 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:07:33.148481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:07:33.148491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:07:33.148502 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:07:33.148515 kernel: random: crng init done Oct 2 19:07:33.148525 kernel: Console: colour VGA+ 80x25 Oct 2 19:07:33.148535 kernel: printk: console [ttyS0] enabled Oct 2 19:07:33.148544 kernel: ACPI: Core revision 20210730 Oct 2 19:07:33.148554 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:07:33.148563 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:07:33.148572 kernel: x2apic enabled Oct 2 19:07:33.148581 kernel: Switched APIC routing to physical x2apic. Oct 2 19:07:33.148589 kernel: kvm-guest: setup PV IPIs Oct 2 19:07:33.148599 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:07:33.148610 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:07:33.148619 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:07:33.148629 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:07:33.148638 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:07:33.148648 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:07:33.148657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:07:33.148667 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:07:33.148677 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:07:33.148686 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:07:33.148703 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:07:33.148713 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:07:33.148724 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:07:33.148735 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:07:33.148758 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:07:33.148767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:07:33.148777 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:07:33.148786 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:07:33.148796 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:07:33.148807 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:07:33.148816 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:07:33.148826 kernel: LSM: Security Framework initializing Oct 2 19:07:33.148834 kernel: SELinux: Initializing. Oct 2 19:07:33.148844 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:07:33.148853 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:07:33.148863 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:07:33.148873 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:07:33.148883 kernel: ... version: 0 Oct 2 19:07:33.148892 kernel: ... bit width: 48 Oct 2 19:07:33.148901 kernel: ... generic registers: 6 Oct 2 19:07:33.148910 kernel: ... value mask: 0000ffffffffffff Oct 2 19:07:33.148919 kernel: ... max period: 00007fffffffffff Oct 2 19:07:33.148928 kernel: ... fixed-purpose events: 0 Oct 2 19:07:33.148961 kernel: ... event mask: 000000000000003f Oct 2 19:07:33.148971 kernel: signal: max sigframe size: 1776 Oct 2 19:07:33.148982 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:07:33.148992 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:07:33.149001 kernel: x86: Booting SMP configuration: Oct 2 19:07:33.149011 kernel: .... 
node #0, CPUs: #1 Oct 2 19:07:33.149021 kernel: kvm-clock: cpu 1, msr 68f8a041, secondary cpu clock Oct 2 19:07:33.149031 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:07:33.149041 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:07:33.149050 kernel: #2 Oct 2 19:07:33.149060 kernel: kvm-clock: cpu 2, msr 68f8a081, secondary cpu clock Oct 2 19:07:33.149070 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:07:33.149081 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:07:33.149091 kernel: #3 Oct 2 19:07:33.149101 kernel: kvm-clock: cpu 3, msr 68f8a0c1, secondary cpu clock Oct 2 19:07:33.149110 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:07:33.149120 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:07:33.149130 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:07:33.149140 kernel: smpboot: Max logical packages: 1 Oct 2 19:07:33.149149 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:07:33.149159 kernel: devtmpfs: initialized Oct 2 19:07:33.149170 kernel: x86/mm: Memory block size: 128MB Oct 2 19:07:33.149180 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:07:33.149189 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:07:33.149198 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:07:33.149208 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:07:33.149217 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:07:33.149226 kernel: audit: type=2000 audit(1696273652.420:1): state=initialized audit_enabled=0 res=1 Oct 2 19:07:33.149235 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:07:33.149244 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:07:33.149255 kernel: cpuidle: using governor menu Oct 2 19:07:33.149264 kernel: ACPI: bus type PCI registered Oct 2 19:07:33.149273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:07:33.149282 kernel: dca service started, version 1.12.1 Oct 2 19:07:33.149291 kernel: PCI: Using configuration type 1 for base access Oct 2 19:07:33.149322 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:07:33.149331 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
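For reference, the BogoMIPS figures in the log above are consistent with the reported lpj value. A minimal sketch of the arithmetic, assuming the conventional relation bogomips = loops_per_jiffy * HZ / 500000 and a HZ=1000 kernel build (both are assumptions, not stated in the log):

    # Cross-check of the BogoMIPS numbers printed during boot (illustrative only).
    LPJ = 2794748      # "Calibrating delay loop (skipped) preset value.. (lpj=2794748)"
    HZ = 1000          # assumed CONFIG_HZ for this kernel build
    NCPUS = 4          # "smp: Brought up 1 node, 4 CPUs"

    per_cpu = LPJ * HZ / 500000    # 5589.496, logged (truncated) as 5589.49
    total = per_cpu * NCPUS        # 22357.984, logged as 22357.98
    print(f"{per_cpu:.2f} {total:.2f}")

With HZ=1000 the lpj value is effectively cycles per millisecond, which also matches the 2794.748 MHz TSC reported earlier.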
Oct 2 19:07:33.149341 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:07:33.149350 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:07:33.149361 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:07:33.149371 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:07:33.149381 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:07:33.149390 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:07:33.149400 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:07:33.149410 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:07:33.149420 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:07:33.149430 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:07:33.149440 kernel: ACPI: Interpreter enabled Oct 2 19:07:33.149451 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:07:33.149461 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:07:33.149471 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:07:33.149481 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:07:33.149491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:07:33.149677 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:07:33.149694 kernel: acpiphp: Slot [3] registered Oct 2 19:07:33.149704 kernel: acpiphp: Slot [4] registered Oct 2 19:07:33.149716 kernel: acpiphp: Slot [5] registered Oct 2 19:07:33.149725 kernel: acpiphp: Slot [6] registered Oct 2 19:07:33.149734 kernel: acpiphp: Slot [7] registered Oct 2 19:07:33.149754 kernel: acpiphp: Slot [8] registered Oct 2 19:07:33.149763 kernel: acpiphp: Slot [9] registered Oct 2 19:07:33.149773 kernel: acpiphp: Slot [10] registered Oct 2 19:07:33.149783 kernel: acpiphp: Slot [11] registered Oct 2 19:07:33.149792 kernel: acpiphp: Slot [12] registered Oct 2 19:07:33.149802 kernel: acpiphp: Slot [13] registered Oct 2 19:07:33.149812 kernel: acpiphp: Slot [14] registered Oct 2 19:07:33.149824 kernel: acpiphp: Slot [15] registered Oct 2 19:07:33.149833 kernel: acpiphp: Slot [16] registered Oct 2 19:07:33.149843 kernel: acpiphp: Slot [17] registered Oct 2 19:07:33.149853 kernel: acpiphp: Slot [18] registered Oct 2 19:07:33.149862 kernel: acpiphp: Slot [19] registered Oct 2 19:07:33.149872 kernel: acpiphp: Slot [20] registered Oct 2 19:07:33.149882 kernel: acpiphp: Slot [21] registered Oct 2 19:07:33.149891 kernel: acpiphp: Slot [22] registered Oct 2 19:07:33.149901 kernel: acpiphp: Slot [23] registered Oct 2 19:07:33.149912 kernel: acpiphp: Slot [24] registered Oct 2 19:07:33.149922 kernel: acpiphp: Slot [25] registered Oct 2 19:07:33.149931 kernel: acpiphp: Slot [26] registered Oct 2 19:07:33.149951 kernel: acpiphp: Slot [27] registered Oct 2 19:07:33.149960 kernel: acpiphp: Slot [28] registered Oct 2 19:07:33.149969 kernel: acpiphp: Slot [29] registered Oct 2 19:07:33.149977 kernel: acpiphp: Slot [30] registered Oct 2 19:07:33.149986 kernel: acpiphp: Slot [31] registered Oct 2 19:07:33.149996 kernel: PCI host bridge to bus 0000:00 Oct 2 19:07:33.150121 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:07:33.150215 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:07:33.150304 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:07:33.150389 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:07:33.150477 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:07:33.150564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:07:33.150693 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:07:33.150825 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:07:33.150974 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:07:33.151079 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:07:33.151179 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:07:33.151278 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:07:33.151375 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:07:33.151478 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:07:33.151608 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:07:33.151711 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:07:33.151814 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:07:33.151918 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:07:33.152052 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:07:33.152151 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:07:33.152248 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:07:33.152333 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:07:33.152445 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:07:33.152545 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:07:33.152653 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:07:33.152766 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:07:33.152887 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:07:33.153006 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:07:33.153105 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:07:33.153203 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:07:33.153318 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:07:33.153420 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:07:33.153518 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:07:33.153616 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:07:33.153719 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:07:33.153733 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:07:33.153754 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:07:33.153764 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:07:33.153774 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:07:33.153784 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:07:33.153794 kernel: iommu: Default domain type: Translated Oct 2 19:07:33.153804 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:07:33.153903 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:07:33.154019 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:07:33.154118 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:07:33.154132 kernel: 
vgaarb: loaded Oct 2 19:07:33.154143 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:07:33.154153 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:07:33.154163 kernel: PTP clock support registered Oct 2 19:07:33.154173 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:07:33.154184 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:07:33.154197 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:07:33.154207 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:07:33.154217 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:07:33.154227 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:07:33.154236 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:07:33.154246 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:07:33.154256 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:07:33.154265 kernel: pnp: PnP ACPI init Oct 2 19:07:33.154386 kernel: pnp 00:02: [dma 2] Oct 2 19:07:33.154404 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:07:33.154414 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:07:33.154423 kernel: NET: Registered PF_INET protocol family Oct 2 19:07:33.154433 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:07:33.154443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:07:33.154452 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:07:33.154463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:07:33.154473 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:07:33.154486 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:07:33.154496 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:07:33.154506 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:07:33.154515 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:07:33.154525 kernel: NET: Registered PF_XDP protocol family Oct 2 19:07:33.154629 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:07:33.154736 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:07:33.154863 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:07:33.155026 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:07:33.155146 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:07:33.155234 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:07:33.155314 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:07:33.155411 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:07:33.155428 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:07:33.155451 kernel: Initialise system trusted keyrings Oct 2 19:07:33.155465 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:07:33.155479 kernel: Key type asymmetric registered Oct 2 19:07:33.155491 kernel: Asymmetric key parser 'x509' registered Oct 2 19:07:33.155513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:07:33.155524 kernel: io scheduler mq-deadline registered Oct 2 19:07:33.155540 kernel: io scheduler kyber registered Oct 2 19:07:33.155550 kernel: io scheduler bfq registered Oct 2 19:07:33.155559 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:07:33.155581 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:07:33.155592 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:07:33.155607 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:07:33.155622 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:07:33.155632 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:07:33.155642 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:07:33.155652 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:07:33.155662 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:07:33.155823 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:07:33.155841 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:07:33.155976 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:07:33.156104 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:07:32 UTC (1696273652) Oct 2 19:07:33.156227 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:07:33.156245 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:07:33.156255 kernel: Segment Routing with IPv6 Oct 2 19:07:33.156277 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:07:33.156288 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:07:33.156302 kernel: Key type dns_resolver registered Oct 2 19:07:33.156315 kernel: IPI shorthand broadcast: enabled Oct 2 19:07:33.156325 kernel: sched_clock: Marking stable (449238202, 119960726)->(659687460, -90488532) Oct 2 19:07:33.156337 kernel: registered taskstats version 1 Oct 2 19:07:33.156347 kernel: Loading compiled-in X.509 certificates Oct 2 19:07:33.156357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:07:33.156374 kernel: Key type .fscrypt registered Oct 2 19:07:33.156389 kernel: Key type fscrypt-provisioning registered Oct 2 19:07:33.156399 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:07:33.156414 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:07:33.156425 kernel: ima: No architecture policies found Oct 2 19:07:33.156437 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:07:33.156446 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:07:33.156470 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:07:33.156485 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:07:33.156497 kernel: Run /init as init process Oct 2 19:07:33.156519 kernel: with arguments: Oct 2 19:07:33.156529 kernel: /init Oct 2 19:07:33.156545 kernel: with environment: Oct 2 19:07:33.156568 kernel: HOME=/ Oct 2 19:07:33.156594 kernel: TERM=linux Oct 2 19:07:33.156609 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:07:33.156625 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:07:33.156651 systemd[1]: Detected virtualization kvm. Oct 2 19:07:33.156664 systemd[1]: Detected architecture x86-64. Oct 2 19:07:33.156681 systemd[1]: Running in initrd. Oct 2 19:07:33.156691 systemd[1]: No hostname configured, using default hostname. 
Oct 2 19:07:33.156718 systemd[1]: Hostname set to . Oct 2 19:07:33.156735 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:07:33.156760 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:07:33.156783 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:07:33.156797 systemd[1]: Reached target cryptsetup.target. Oct 2 19:07:33.156812 systemd[1]: Reached target paths.target. Oct 2 19:07:33.156834 systemd[1]: Reached target slices.target. Oct 2 19:07:33.156846 systemd[1]: Reached target swap.target. Oct 2 19:07:33.156864 systemd[1]: Reached target timers.target. Oct 2 19:07:33.156891 systemd[1]: Listening on iscsid.socket. Oct 2 19:07:33.156903 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:07:33.156921 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:07:33.156933 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:07:33.156956 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:07:33.156966 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:07:33.156989 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:07:33.157003 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:07:33.157022 systemd[1]: Reached target sockets.target. Oct 2 19:07:33.157033 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:07:33.157043 systemd[1]: Finished network-cleanup.service. Oct 2 19:07:33.157064 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:07:33.157074 systemd[1]: Starting systemd-journald.service... Oct 2 19:07:33.157083 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:07:33.157098 systemd[1]: Starting systemd-resolved.service... Oct 2 19:07:33.157109 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:07:33.157126 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:07:33.157147 systemd-journald[197]: Journal started Oct 2 19:07:33.157240 systemd-journald[197]: Runtime Journal (/run/log/journal/4920c2c570114e279d36d5f5e1a8ec89) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:07:33.134235 systemd-modules-load[198]: Inserted module 'overlay' Oct 2 19:07:33.187219 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:07:33.187248 kernel: Bridge firewalling registered Oct 2 19:07:33.187261 systemd[1]: Started systemd-journald.service. Oct 2 19:07:33.187283 kernel: audit: type=1130 audit(1696273653.173:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.187297 kernel: audit: type=1130 audit(1696273653.174:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.187310 kernel: audit: type=1130 audit(1696273653.174:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.187323 kernel: audit: type=1130 audit(1696273653.174:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:07:33.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.162066 systemd-modules-load[198]: Inserted module 'br_netfilter' Oct 2 19:07:33.192212 kernel: audit: type=1130 audit(1696273653.187:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.166175 systemd-resolved[199]: Positive Trust Anchors: Oct 2 19:07:33.166184 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:07:33.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.166213 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:07:33.203098 kernel: audit: type=1130 audit(1696273653.199:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.168872 systemd-resolved[199]: Defaulting to hostname 'linux'. Oct 2 19:07:33.174961 systemd[1]: Started systemd-resolved.service. Oct 2 19:07:33.175556 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:07:33.175861 systemd[1]: Reached target nss-lookup.target. Oct 2 19:07:33.178812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:07:33.187293 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:07:33.188839 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:07:33.196251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:07:33.213979 kernel: SCSI subsystem initialized Oct 2 19:07:33.216556 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:07:33.220390 kernel: audit: type=1130 audit(1696273653.216:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:07:33.220908 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:07:33.225967 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:07:33.226000 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:07:33.227005 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:07:33.230119 systemd-modules-load[198]: Inserted module 'dm_multipath' Oct 2 19:07:33.230717 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:07:33.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.235199 kernel: audit: type=1130 audit(1696273653.230:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.232538 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:07:33.243653 dracut-cmdline[215]: dracut-dracut-053 Oct 2 19:07:33.245586 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:07:33.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.250343 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:07:33.256295 kernel: audit: type=1130 audit(1696273653.245:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.336967 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:07:33.352795 kernel: iscsi: registered transport (tcp) Oct 2 19:07:33.378985 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:07:33.379064 kernel: QLogic iSCSI HBA Driver Oct 2 19:07:33.410259 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:07:33.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.413082 systemd[1]: Starting dracut-pre-udev.service... 
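The dracut-cmdline entry above shows the kernel command line as plain space-separated tokens: bare flags plus key=value options, with root=, mount.usr= and the verity.usr*= options driving how the root and /usr devices are located and verified later in this log. A minimal illustrative parser for that format, reusing (slightly abridged) the values from this log; the snippet itself is not part of the boot output:

    # Split a kernel command line of the kind dracut reports above into
    # bare flags and key=value options (illustrative sketch).
    cmdline = (
        "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
        "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected "
        "verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1"
    )

    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")   # keeps the second '=' in PARTUUID=... intact
            options[key] = value                   # a repeated key keeps its last value
        else:
            flags.append(token)

    print(options["root"])             # LABEL=ROOT
    print(options["mount.usr"])        # /dev/mapper/usr
    print(options["verity.usrhash"])   # dm-verity root hash for the /usr partition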
Oct 2 19:07:33.460976 kernel: raid6: avx2x4 gen() 29324 MB/s Oct 2 19:07:33.477971 kernel: raid6: avx2x4 xor() 6371 MB/s Oct 2 19:07:33.494973 kernel: raid6: avx2x2 gen() 29264 MB/s Oct 2 19:07:33.512025 kernel: raid6: avx2x2 xor() 18153 MB/s Oct 2 19:07:33.528970 kernel: raid6: avx2x1 gen() 22495 MB/s Oct 2 19:07:33.546014 kernel: raid6: avx2x1 xor() 13025 MB/s Oct 2 19:07:33.564258 kernel: raid6: sse2x4 gen() 11484 MB/s Oct 2 19:07:33.579987 kernel: raid6: sse2x4 xor() 4287 MB/s Oct 2 19:07:33.596990 kernel: raid6: sse2x2 gen() 11633 MB/s Oct 2 19:07:33.613982 kernel: raid6: sse2x2 xor() 7776 MB/s Oct 2 19:07:33.630994 kernel: raid6: sse2x1 gen() 9594 MB/s Oct 2 19:07:33.648488 kernel: raid6: sse2x1 xor() 6880 MB/s Oct 2 19:07:33.648752 kernel: raid6: using algorithm avx2x4 gen() 29324 MB/s Oct 2 19:07:33.648766 kernel: raid6: .... xor() 6371 MB/s, rmw enabled Oct 2 19:07:33.648775 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:07:33.663062 kernel: xor: automatically using best checksumming function avx Oct 2 19:07:33.768990 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:07:33.777010 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:07:33.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.777000 audit: BPF prog-id=7 op=LOAD Oct 2 19:07:33.777000 audit: BPF prog-id=8 op=LOAD Oct 2 19:07:33.778622 systemd[1]: Starting systemd-udevd.service... Oct 2 19:07:33.791535 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 2 19:07:33.796356 systemd[1]: Started systemd-udevd.service. Oct 2 19:07:33.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.799803 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:07:33.810314 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Oct 2 19:07:33.834481 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:07:33.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.835767 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:07:33.875434 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:07:33.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:33.917425 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:07:33.917715 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:07:33.931489 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:07:33.931544 kernel: AES CTR mode by8 optimization enabled Oct 2 19:07:33.931960 kernel: libata version 3.00 loaded. 
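The raid6 lines above are a one-off boot-time benchmark; the kernel keeps the implementation with the highest gen() throughput, which is why it reports "using algorithm avx2x4 gen() 29324 MB/s". A tiny sketch of that selection using the throughputs measured in this log:

    # Pick the raid6 implementation with the best gen() throughput (MB/s),
    # mirroring the "raid6: using algorithm avx2x4" line above.
    gen_mb_s = {
        "avx2x4": 29324, "avx2x2": 29264, "avx2x1": 22495,
        "sse2x4": 11484, "sse2x2": 11633, "sse2x1": 9594,
    }
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best, gen_mb_s[best], "MB/s")   # avx2x4 29324 MB/s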
Oct 2 19:07:33.935967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:07:33.939960 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:07:33.944970 kernel: scsi host0: ata_piix Oct 2 19:07:33.947990 kernel: scsi host1: ata_piix Oct 2 19:07:33.948953 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:07:33.948972 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:07:33.969383 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:07:33.979363 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Oct 2 19:07:33.979215 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:07:33.991079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:07:33.996089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:07:34.001735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:07:34.004298 systemd[1]: Starting disk-uuid.service... Oct 2 19:07:34.108070 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:07:34.110051 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:07:34.140152 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:07:34.140412 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:07:34.157962 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:07:34.283966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:07:34.286976 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:07:34.290966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:07:35.434962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:07:35.435535 disk-uuid[523]: The operation has completed successfully. Oct 2 19:07:35.463556 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:07:35.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.463640 systemd[1]: Finished disk-uuid.service. Oct 2 19:07:35.468453 systemd[1]: Starting verity-setup.service... Oct 2 19:07:35.484964 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:07:35.516200 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:07:35.518722 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:07:35.521128 systemd[1]: Finished verity-setup.service. Oct 2 19:07:35.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.589973 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:07:35.590355 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:07:35.591464 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:07:35.593042 systemd[1]: Starting ignition-setup.service... Oct 2 19:07:35.594529 systemd[1]: Starting parse-ip-for-networkd.service... 
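The dev-disk-by\x2dlabel-*.device and dev-disk-by\x2dpartuuid-*.device units found above correspond to udev-maintained symlinks under /dev/disk/, which is broadly how root=LABEL=ROOT and verity.usr=PARTUUID=... end up pointing at concrete block devices. A minimal sketch of following those links on a running Linux system with udev; the paths are taken from this log, the snippet is not part of the boot output:

    # Resolve udev's by-label / by-partuuid symlinks to the underlying block devices.
    import os

    def resolve(link):
        return os.path.realpath(link) if os.path.exists(link) else "(not present)"

    for link in (
        "/dev/disk/by-label/ROOT",
        "/dev/disk/by-label/OEM",
        "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132",
    ):
        print(link, "->", resolve(link))   # on this machine ROOT resolves to vda9, per the sysroot mount later in the log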
Oct 2 19:07:35.601361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:07:35.601392 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:07:35.601410 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:07:35.609454 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:07:35.625762 systemd[1]: Finished ignition-setup.service. Oct 2 19:07:35.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.627428 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:07:35.657904 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:07:35.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.659000 audit: BPF prog-id=9 op=LOAD Oct 2 19:07:35.659792 systemd[1]: Starting systemd-networkd.service... Oct 2 19:07:35.680690 systemd-networkd[695]: lo: Link UP Oct 2 19:07:35.680698 systemd-networkd[695]: lo: Gained carrier Oct 2 19:07:35.681145 systemd-networkd[695]: Enumeration completed Oct 2 19:07:35.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.681314 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:07:35.681441 systemd[1]: Started systemd-networkd.service. Oct 2 19:07:35.682121 systemd-networkd[695]: eth0: Link UP Oct 2 19:07:35.682125 systemd-networkd[695]: eth0: Gained carrier Oct 2 19:07:35.682169 systemd[1]: Reached target network.target. Oct 2 19:07:35.684139 systemd[1]: Starting iscsiuio.service... Oct 2 19:07:35.689841 systemd[1]: Started iscsiuio.service. Oct 2 19:07:35.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.691753 systemd[1]: Starting iscsid.service... Oct 2 19:07:35.694687 iscsid[705]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:07:35.694687 iscsid[705]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:07:35.694687 iscsid[705]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:07:35.694687 iscsid[705]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:07:35.694687 iscsid[705]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:07:35.694687 iscsid[705]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:07:35.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.696069 systemd[1]: Started iscsid.service. 
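The iscsid warning above spells out what it expects in /etc/iscsi/initiatorname.iscsi: a single line of the form InitiatorName=iqn.yyyy-mm.reversed.domain:identifier. A minimal sketch of generating such a file in that format; the domain and identifier below are made-up placeholders, and the file is written to the current directory rather than /etc:

    # Write an initiator-name file in the format the iscsid message above asks for.
    # The reversed domain and identifier are placeholders, not values from this system.
    from datetime import date
    from pathlib import Path

    iqn = "iqn.{:%Y-%m}.org.example:node1".format(date.today())
    Path("initiatorname.iscsi").write_text("InitiatorName={}\n".format(iqn))
    print("InitiatorName=" + iqn)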
Oct 2 19:07:35.700215 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:07:35.708910 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:07:35.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.710051 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:07:35.710599 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:07:35.711643 systemd[1]: Reached target remote-fs.target. Oct 2 19:07:35.713311 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:07:35.720197 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:07:35.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.731037 systemd-networkd[695]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:07:35.890322 ignition[644]: Ignition 2.14.0 Oct 2 19:07:35.890335 ignition[644]: Stage: fetch-offline Oct 2 19:07:35.890410 ignition[644]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:35.890420 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:35.890562 ignition[644]: parsed url from cmdline: "" Oct 2 19:07:35.890566 ignition[644]: no config URL provided Oct 2 19:07:35.890570 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:07:35.890577 ignition[644]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:07:35.890604 ignition[644]: op(1): [started] loading QEMU firmware config module Oct 2 19:07:35.890609 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:07:35.898640 ignition[644]: op(1): [finished] loading QEMU firmware config module Oct 2 19:07:35.911629 ignition[644]: parsing config with SHA512: 08561e0e98b956623cd8de571de0dc3a7961550f13c8ef30897f59fe725321970a47b8095a42d2d3c95aeec5fb6681ec7640568212c7cb3263b94103b050e953 Oct 2 19:07:35.960530 unknown[644]: fetched base config from "system" Oct 2 19:07:35.960541 unknown[644]: fetched user config from "qemu" Oct 2 19:07:35.960994 ignition[644]: fetch-offline: fetch-offline passed Oct 2 19:07:35.961082 ignition[644]: Ignition finished successfully Oct 2 19:07:35.962953 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:07:35.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.964207 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:07:35.965217 systemd[1]: Starting ignition-kargs.service... Oct 2 19:07:35.979879 ignition[721]: Ignition 2.14.0 Oct 2 19:07:35.979889 ignition[721]: Stage: kargs Oct 2 19:07:35.980007 ignition[721]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:35.980017 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:35.981083 ignition[721]: kargs: kargs passed Oct 2 19:07:35.982726 systemd[1]: Finished ignition-kargs.service. Oct 2 19:07:35.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:07:35.981147 ignition[721]: Ignition finished successfully Oct 2 19:07:35.984512 systemd[1]: Starting ignition-disks.service... Oct 2 19:07:35.993202 ignition[727]: Ignition 2.14.0 Oct 2 19:07:35.993214 ignition[727]: Stage: disks Oct 2 19:07:35.993354 ignition[727]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:35.993364 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:35.994241 ignition[727]: disks: disks passed Oct 2 19:07:35.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:35.995426 systemd[1]: Finished ignition-disks.service. Oct 2 19:07:35.994280 ignition[727]: Ignition finished successfully Oct 2 19:07:35.996337 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:07:35.997182 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:07:35.997778 systemd[1]: Reached target local-fs.target. Oct 2 19:07:35.998291 systemd[1]: Reached target sysinit.target. Oct 2 19:07:35.998468 systemd[1]: Reached target basic.target. Oct 2 19:07:35.999500 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:07:36.011355 systemd-fsck[735]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:07:36.018799 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:07:36.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:36.020173 systemd[1]: Mounting sysroot.mount... Oct 2 19:07:36.029957 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:07:36.030103 systemd[1]: Mounted sysroot.mount. Oct 2 19:07:36.030492 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:07:36.032838 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:07:36.033580 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:07:36.033623 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:07:36.033654 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:07:36.036401 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:07:36.038302 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:07:36.043406 initrd-setup-root[745]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:07:36.047842 initrd-setup-root[753]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:07:36.051645 initrd-setup-root[761]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:07:36.055518 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:07:36.087930 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:07:36.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:36.089262 systemd[1]: Starting ignition-mount.service... Oct 2 19:07:36.090757 systemd[1]: Starting sysroot-boot.service... Oct 2 19:07:36.095625 bash[786]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:07:36.106148 systemd[1]: Finished sysroot-boot.service. 
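The Ignition entries in this log ("parsing config with SHA512: ..." above, and the "file matches expected sum of ..." lines that follow) are SHA-512 hex digests: the config fingerprint is a digest of the parsed config, and each downloaded file is checked against the digest supplied for it. A minimal sketch of that verification step; the payload and expected digest here are stand-ins, not the sums from this log:

    # Verify fetched bytes against an expected SHA-512 digest, in the spirit of the
    # Ignition log entries. The payload and expected value below are placeholders.
    import hashlib

    def sha512_matches(data, expected_hex):
        return hashlib.sha512(data).hexdigest() == expected_hex.lower()

    payload = b"example payload bytes"
    expected = hashlib.sha512(payload).hexdigest()   # stand-in for a known-good sum
    print(sha512_matches(payload, expected))         # True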
Oct 2 19:07:36.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:36.107298 ignition[788]: INFO : Ignition 2.14.0 Oct 2 19:07:36.107298 ignition[788]: INFO : Stage: mount Oct 2 19:07:36.107298 ignition[788]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:36.107298 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:36.109948 ignition[788]: INFO : mount: mount passed Oct 2 19:07:36.109948 ignition[788]: INFO : Ignition finished successfully Oct 2 19:07:36.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:36.109479 systemd[1]: Finished ignition-mount.service. Oct 2 19:07:36.557188 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:07:36.607995 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (796) Oct 2 19:07:36.617812 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:07:36.617886 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:07:36.617902 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:07:36.632879 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:07:36.635292 systemd[1]: Starting ignition-files.service... Oct 2 19:07:36.687507 ignition[816]: INFO : Ignition 2.14.0 Oct 2 19:07:36.688698 ignition[816]: INFO : Stage: files Oct 2 19:07:36.688698 ignition[816]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:36.688698 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:36.695196 ignition[816]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:07:36.701289 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:07:36.701289 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:07:36.734894 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:07:36.734894 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:07:36.746460 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:07:36.744522 unknown[816]: wrote ssh authorized keys file for user: core Oct 2 19:07:36.755087 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:07:36.755087 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:07:37.002776 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:07:37.124830 systemd-networkd[695]: eth0: Gained IPv6LL Oct 2 19:07:38.145974 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:07:38.145974 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:07:38.145974 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:07:38.145974 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:07:38.257646 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:07:38.691213 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:07:38.691213 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:07:38.708625 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:07:38.708625 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:07:38.841659 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:07:39.794832 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 19:07:39.794832 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:07:39.812122 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:07:39.812122 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:07:39.885630 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:07:41.559193 ignition[816]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 19:07:41.576565 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:07:41.576565 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:07:41.576565 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:07:41.576565 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:07:41.576565 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:07:41.576565 ignition[816]: INFO : files: op(9): [started] processing unit "coreos-metadata.service" Oct 2 19:07:41.712182 ignition[816]: INFO : files: op(9): op(a): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(9): op(a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(9): [finished] processing unit "coreos-metadata.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(b): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(b): op(c): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(b): op(c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(b): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(d): [started] processing unit "prepare-critools.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(d): op(e): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(d): op(e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(d): [finished] processing unit "prepare-critools.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:07:41.734897 ignition[816]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:07:42.362738 ignition[816]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: op(11): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:07:42.364307 ignition[816]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:07:42.364307 ignition[816]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:07:42.364307 ignition[816]: INFO : files: files passed Oct 2 19:07:42.364307 ignition[816]: INFO : Ignition finished successfully Oct 2 19:07:42.373880 systemd[1]: Finished ignition-files.service. Oct 2 19:07:42.385337 kernel: kauditd_printk_skb: 25 callbacks suppressed Oct 2 19:07:42.385392 kernel: audit: type=1130 audit(1696273662.380:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:07:42.407174 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:07:42.408261 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:07:42.409859 systemd[1]: Starting ignition-quench.service... Oct 2 19:07:42.439695 initrd-setup-root-after-ignition[840]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:07:42.461525 kernel: audit: type=1130 audit(1696273662.443:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.461558 kernel: audit: type=1131 audit(1696273662.443:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.461572 kernel: audit: type=1130 audit(1696273662.444:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.461834 initrd-setup-root-after-ignition[842]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:07:42.444450 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:07:42.444575 systemd[1]: Finished ignition-quench.service. Oct 2 19:07:42.445028 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:07:42.445166 systemd[1]: Reached target ignition-complete.target. Oct 2 19:07:42.448363 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:07:42.501837 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:07:42.573253 kernel: audit: type=1130 audit(1696273662.505:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.573286 kernel: audit: type=1131 audit(1696273662.505:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.501974 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:07:42.506069 systemd[1]: Reached target initrd-fs.target. 
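
In the files stage above, Ignition fetches the CNI plugins, crictl, kubeadm and kubelet artifacts and reports for each one that the downloaded file matches the expected sum before writing it under /sysroot. A minimal sketch of that fetch-then-verify pattern, using the cni-plugins URL and SHA-512 digest taken from the log (plain Python for illustration, not Ignition's actual implementation):

    import hashlib
    import urllib.request

    # URL and expected digest exactly as logged by the Ignition files stage (op(3)).
    url = ("https://github.com/containernetworking/plugins/releases/download/"
           "v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz")
    expected_sha512 = (
        "5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af"
        "754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540"
    )

    with urllib.request.urlopen(url) as resp:
        data = resp.read()

    digest = hashlib.sha512(data).hexdigest()
    if digest != expected_sha512:
        raise RuntimeError(f"checksum mismatch: got {digest}")

    # Only write the archive out once the digest matches, mirroring the
    # "[finished] writing file" step in the log.
    with open("cni-plugins-linux-amd64-v1.3.0.tgz", "wb") as f:
        f.write(data)
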
Oct 2 19:07:42.506853 systemd[1]: Reached target initrd.target. Oct 2 19:07:42.571159 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:07:42.577062 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:07:42.603444 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:07:42.611568 kernel: audit: type=1130 audit(1696273662.605:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.609378 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:07:42.633271 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:07:42.634514 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:07:42.635494 systemd[1]: Stopped target timers.target. Oct 2 19:07:42.635799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:07:42.635978 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:07:42.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.641665 systemd[1]: Stopped target initrd.target. Oct 2 19:07:42.664389 kernel: audit: type=1131 audit(1696273662.641:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.664431 kernel: audit: type=1131 audit(1696273662.654:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.664446 kernel: audit: type=1131 audit(1696273662.658:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.645092 systemd[1]: Stopped target basic.target. Oct 2 19:07:42.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.646019 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:07:42.647004 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:07:42.647963 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:07:42.648928 systemd[1]: Stopped target remote-fs.target. Oct 2 19:07:42.649888 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:07:42.650906 systemd[1]: Stopped target sysinit.target. Oct 2 19:07:42.651873 systemd[1]: Stopped target local-fs.target. Oct 2 19:07:42.652784 systemd[1]: Stopped target local-fs-pre.target. 
Oct 2 19:07:42.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.653741 systemd[1]: Stopped target swap.target. Oct 2 19:07:42.654603 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:07:42.654769 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:07:42.655806 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:07:42.659227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:07:42.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.723507 iscsid[705]: iscsid shutting down. Oct 2 19:07:42.729189 ignition[856]: INFO : Ignition 2.14.0 Oct 2 19:07:42.729189 ignition[856]: INFO : Stage: umount Oct 2 19:07:42.729189 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:07:42.729189 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:07:42.729189 ignition[856]: INFO : umount: umount passed Oct 2 19:07:42.729189 ignition[856]: INFO : Ignition finished successfully Oct 2 19:07:42.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.659367 systemd[1]: Stopped dracut-initqueue.service. 
Oct 2 19:07:42.659719 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:07:42.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.659830 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:07:42.665285 systemd[1]: Stopped target paths.target. Oct 2 19:07:42.670654 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:07:42.677591 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:07:42.689913 systemd[1]: Stopped target slices.target. Oct 2 19:07:42.694886 systemd[1]: Stopped target sockets.target. Oct 2 19:07:42.697628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:07:42.697813 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:07:42.699019 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:07:42.699140 systemd[1]: Stopped ignition-files.service. Oct 2 19:07:42.703358 systemd[1]: Stopping ignition-mount.service... Oct 2 19:07:42.712421 systemd[1]: Stopping iscsid.service... Oct 2 19:07:42.716884 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:07:42.717145 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:07:42.724589 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:07:42.728690 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:07:42.728903 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:07:42.729669 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:07:42.729796 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:07:42.736665 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:07:42.736795 systemd[1]: Stopped iscsid.service. Oct 2 19:07:42.739013 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:07:42.739097 systemd[1]: Stopped ignition-mount.service. Oct 2 19:07:42.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.748252 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:07:42.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.748361 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:07:42.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.750019 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:07:42.750062 systemd[1]: Closed iscsid.socket. Oct 2 19:07:42.750125 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:07:42.750179 systemd[1]: Stopped ignition-disks.service. 
Oct 2 19:07:42.750247 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:07:42.750286 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:07:42.750345 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:07:42.750385 systemd[1]: Stopped ignition-setup.service. Oct 2 19:07:42.750546 systemd[1]: Stopping iscsiuio.service... Oct 2 19:07:42.756313 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:07:42.759969 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:07:42.760121 systemd[1]: Stopped iscsiuio.service. Oct 2 19:07:42.760532 systemd[1]: Stopped target network.target. Oct 2 19:07:42.766463 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:07:42.766527 systemd[1]: Closed iscsiuio.socket. Oct 2 19:07:42.772092 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:07:42.779142 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:07:42.802572 systemd-networkd[695]: eth0: DHCPv6 lease lost Oct 2 19:07:42.804177 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:07:42.838000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:07:42.805954 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:07:42.812628 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:07:42.812741 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:07:42.813896 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:07:42.813948 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:07:42.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.819239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:07:42.819338 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:07:42.820557 systemd[1]: Stopping network-cleanup.service... Oct 2 19:07:42.820888 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:07:42.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.820962 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:07:42.821361 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:07:42.821428 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:07:42.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.853372 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:07:42.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.853465 systemd[1]: Stopped systemd-modules-load.service. 
Oct 2 19:07:42.899000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:07:42.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.855922 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:07:42.859001 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:07:42.859643 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:07:42.859770 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:07:42.867308 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:07:42.867518 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:07:42.879919 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:07:42.880080 systemd[1]: Stopped network-cleanup.service. Oct 2 19:07:42.884894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:07:42.884974 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:07:42.886816 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:07:42.886865 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:07:42.888699 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:07:42.888761 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:07:42.893257 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:07:42.893590 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:07:42.898735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:07:42.898819 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:07:42.935312 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:07:42.941255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:07:42.941374 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:07:42.948089 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:07:42.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.948210 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:07:42.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:42.957557 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:07:42.969263 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:07:42.993062 systemd[1]: Switching root. Oct 2 19:07:43.050251 systemd-journald[197]: Journal stopped Oct 2 19:07:50.999305 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Oct 2 19:07:50.999353 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:07:50.999365 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:07:50.999379 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:07:50.999400 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:07:50.999410 kernel: SELinux: policy capability open_perms=1 Oct 2 19:07:50.999420 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:07:50.999429 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:07:50.999442 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:07:50.999451 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:07:50.999464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:07:50.999476 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:07:50.999486 systemd[1]: Successfully loaded SELinux policy in 143.351ms. Oct 2 19:07:50.999503 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.663ms. Oct 2 19:07:50.999514 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:07:50.999525 systemd[1]: Detected virtualization kvm. Oct 2 19:07:50.999537 systemd[1]: Detected architecture x86-64. Oct 2 19:07:50.999550 systemd[1]: Detected first boot. Oct 2 19:07:50.999561 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:07:50.999571 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:07:50.999582 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:07:50.999592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:07:50.999603 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:07:50.999615 kernel: kauditd_printk_skb: 38 callbacks suppressed Oct 2 19:07:50.999627 kernel: audit: type=1334 audit(1696273670.884:84): prog-id=12 op=LOAD Oct 2 19:07:50.999636 kernel: audit: type=1334 audit(1696273670.884:85): prog-id=3 op=UNLOAD Oct 2 19:07:50.999646 kernel: audit: type=1334 audit(1696273670.885:86): prog-id=13 op=LOAD Oct 2 19:07:50.999657 kernel: audit: type=1334 audit(1696273670.886:87): prog-id=14 op=LOAD Oct 2 19:07:50.999666 kernel: audit: type=1334 audit(1696273670.886:88): prog-id=4 op=UNLOAD Oct 2 19:07:50.999675 kernel: audit: type=1334 audit(1696273670.886:89): prog-id=5 op=UNLOAD Oct 2 19:07:50.999684 kernel: audit: type=1334 audit(1696273670.888:90): prog-id=15 op=LOAD Oct 2 19:07:50.999694 kernel: audit: type=1334 audit(1696273670.888:91): prog-id=12 op=UNLOAD Oct 2 19:07:50.999704 kernel: audit: type=1334 audit(1696273670.890:92): prog-id=16 op=LOAD Oct 2 19:07:50.999713 kernel: audit: type=1334 audit(1696273670.891:93): prog-id=17 op=LOAD Oct 2 19:07:50.999722 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:07:50.999732 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:07:50.999742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:07:50.999752 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Oct 2 19:07:50.999762 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:07:50.999772 systemd[1]: Created slice system-getty.slice. Oct 2 19:07:50.999783 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:07:50.999793 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:07:50.999803 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:07:50.999813 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:07:50.999823 systemd[1]: Created slice user.slice. Oct 2 19:07:50.999833 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:07:50.999842 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:07:50.999853 systemd[1]: Set up automount boot.automount. Oct 2 19:07:50.999863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:07:50.999874 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:07:50.999884 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:07:50.999894 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:07:50.999906 systemd[1]: Reached target integritysetup.target. Oct 2 19:07:50.999915 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:07:50.999925 systemd[1]: Reached target remote-fs.target. Oct 2 19:07:50.999991 systemd[1]: Reached target slices.target. Oct 2 19:07:51.000002 systemd[1]: Reached target swap.target. Oct 2 19:07:51.000015 systemd[1]: Reached target torcx.target. Oct 2 19:07:51.000026 systemd[1]: Reached target veritysetup.target. Oct 2 19:07:51.000036 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:07:51.000045 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:07:51.000055 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:07:51.000065 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:07:51.000075 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:07:51.000085 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:07:51.000094 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:07:51.000106 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:07:51.000116 systemd[1]: Mounting media.mount... Oct 2 19:07:51.000127 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:07:51.000139 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:07:51.000149 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:07:51.000158 systemd[1]: Mounting tmp.mount... Oct 2 19:07:51.000168 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:07:51.000178 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:07:51.000188 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:07:51.000199 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:07:51.000209 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:07:51.000219 systemd[1]: Starting modprobe@drm.service... Oct 2 19:07:51.000229 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:07:51.000239 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:07:51.000254 systemd[1]: Starting modprobe@loop.service... Oct 2 19:07:51.000264 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:07:51.000275 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:07:51.000284 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:07:51.000297 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Oct 2 19:07:51.000307 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:07:51.000316 systemd[1]: Stopped systemd-journald.service. Oct 2 19:07:51.000325 kernel: loop: module loaded Oct 2 19:07:51.000335 systemd[1]: Starting systemd-journald.service... Oct 2 19:07:51.000344 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:07:51.000354 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:07:51.000365 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:07:51.000375 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:07:51.000385 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:07:51.000396 systemd[1]: Stopped verity-setup.service. Oct 2 19:07:51.000406 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:07:51.000416 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:07:51.000425 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:07:51.000435 systemd[1]: Mounted media.mount. Oct 2 19:07:51.000449 systemd-journald[968]: Journal started Oct 2 19:07:51.000488 systemd-journald[968]: Runtime Journal (/run/log/journal/4920c2c570114e279d36d5f5e1a8ec89) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:07:43.318000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:07:43.706000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:07:43.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:07:43.713000 audit: BPF prog-id=10 op=LOAD Oct 2 19:07:43.715000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:07:43.724000 audit: BPF prog-id=11 op=LOAD Oct 2 19:07:43.726000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:07:50.884000 audit: BPF prog-id=12 op=LOAD Oct 2 19:07:50.884000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:07:50.885000 audit: BPF prog-id=13 op=LOAD Oct 2 19:07:50.886000 audit: BPF prog-id=14 op=LOAD Oct 2 19:07:50.886000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:07:50.886000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:07:50.888000 audit: BPF prog-id=15 op=LOAD Oct 2 19:07:50.888000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:07:50.890000 audit: BPF prog-id=16 op=LOAD Oct 2 19:07:50.891000 audit: BPF prog-id=17 op=LOAD Oct 2 19:07:50.891000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:07:50.891000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:07:50.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:07:50.906000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:07:50.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.975000 audit: BPF prog-id=18 op=LOAD Oct 2 19:07:50.975000 audit: BPF prog-id=19 op=LOAD Oct 2 19:07:50.975000 audit: BPF prog-id=20 op=LOAD Oct 2 19:07:50.975000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:07:50.975000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:07:50.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:50.997000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:07:50.997000 audit[968]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffebbd27b30 a2=4000 a3=7ffebbd27bcc items=0 ppid=1 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:50.997000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:07:44.154015 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:07:50.883796 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:07:51.001380 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:07:44.158563 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:07:50.883808 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:07:44.158595 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:07:50.892980 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:07:44.158645 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:07:44.158659 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:07:44.158718 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:07:44.158737 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:07:44.159163 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:07:44.159210 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:07:44.159222 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:07:44.159870 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:07:44.159915 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:07:44.159952 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:07:44.159973 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:07:44.159996 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:07:44.160016 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:07:50.520057 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:07:50.520645 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:07:50.520843 /usr/lib/systemd/system-generators/torcx-generator[895]: 
time="2023-10-02T19:07:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:07:50.521112 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:07:50.521446 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:07:50.521833 /usr/lib/systemd/system-generators/torcx-generator[895]: time="2023-10-02T19:07:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:07:51.004022 systemd[1]: Started systemd-journald.service. Oct 2 19:07:51.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.003822 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:07:51.004488 systemd[1]: Mounted tmp.mount. Oct 2 19:07:51.005194 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:07:51.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.005951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:07:51.006051 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:07:51.006809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:07:51.006979 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:07:51.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.008197 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:07:51.008339 systemd[1]: Finished modprobe@drm.service. 
Oct 2 19:07:51.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.011962 kernel: fuse: init (API version 7.34) Oct 2 19:07:51.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.012047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:07:51.012251 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:07:51.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.013256 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:07:51.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.013434 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:07:51.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.014166 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:07:51.014330 systemd[1]: Finished modprobe@loop.service. Oct 2 19:07:51.015179 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:07:51.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.015980 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:07:51.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.016904 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:07:51.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.017786 systemd[1]: Finished systemd-remount-fs.service. 
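
With systemd-journald.service started above, the runtime journal lives under /run/log/journal/4920c2c570114e279d36d5f5e1a8ec89 and is flushed to the persistent journal shortly afterwards (see the systemd-journal-flush entries that follow). A minimal sketch for pulling this boot's entries for one unit out of the journal as JSON, assuming journalctl is available on the host; the unit name is only an example:

    import json
    import subprocess

    # Pull this boot's entries for a single unit as JSON (one object per line).
    # -b, -u, -o json and --no-pager are standard journalctl options.
    cmd = ["journalctl", "-b", "-u", "systemd-journald.service",
           "-o", "json", "--no-pager"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        entry = json.loads(line)
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
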
Oct 2 19:07:51.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.018890 systemd[1]: Reached target network-pre.target. Oct 2 19:07:51.020730 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:07:51.022737 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:07:51.023487 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:07:51.027062 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:07:51.028661 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:07:51.029351 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:07:51.030515 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:07:51.031148 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:07:51.043228 systemd-journald[968]: Time spent on flushing to /var/log/journal/4920c2c570114e279d36d5f5e1a8ec89 is 19.181ms for 1096 entries. Oct 2 19:07:51.043228 systemd-journald[968]: System Journal (/var/log/journal/4920c2c570114e279d36d5f5e1a8ec89) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:07:51.170355 systemd-journald[968]: Received client request to flush runtime journal. Oct 2 19:07:51.206458 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:07:51.225812 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:07:51.229873 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:07:51.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.231175 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:07:51.232049 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:07:51.233264 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:07:51.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.234477 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:07:51.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.235699 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:07:51.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.236661 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:07:51.239249 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:07:51.242925 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:07:51.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.246730 udevadm[1000]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:07:51.843859 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:07:51.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.845000 audit: BPF prog-id=21 op=LOAD Oct 2 19:07:51.848000 audit: BPF prog-id=22 op=LOAD Oct 2 19:07:51.848000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:07:51.848000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:07:51.849769 systemd[1]: Starting systemd-udevd.service... Oct 2 19:07:51.865423 systemd-udevd[1001]: Using default interface naming scheme 'v252'. Oct 2 19:07:51.923790 systemd[1]: Started systemd-udevd.service. Oct 2 19:07:51.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.930000 audit: BPF prog-id=23 op=LOAD Oct 2 19:07:51.931734 systemd[1]: Starting systemd-networkd.service... Oct 2 19:07:51.935000 audit: BPF prog-id=24 op=LOAD Oct 2 19:07:51.935000 audit: BPF prog-id=25 op=LOAD Oct 2 19:07:51.935000 audit: BPF prog-id=26 op=LOAD Oct 2 19:07:51.937077 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:07:51.947434 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:07:51.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:51.966798 systemd[1]: Started systemd-userdbd.service. Oct 2 19:07:51.984016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:07:51.987969 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:07:52.003592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
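
The entry above records udev finding the OEM partition by its filesystem label (dev-disk-by\x2dlabel-OEM.device); earlier in the log the same partition, /dev/vda6, was mounted as sysroot-usr-share-oem.mount. A minimal sketch for resolving such a label back to its block device, assuming the usual /dev/disk/by-label symlinks maintained by udev:

    import os

    # udev maintains /dev/disk/by-label/<LABEL> symlinks pointing at the backing
    # device, so resolving the "OEM" label should land on the partition the log
    # shows being mounted earlier (/dev/vda6).
    label_path = "/dev/disk/by-label/OEM"
    print(label_path, "->", os.path.realpath(label_path))
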
Oct 2 19:07:52.004000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:07:52.004000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fd9d5193b0 a1=32194 a2=7f76b2fd7bc5 a3=5 items=106 ppid=1001 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:52.004000 audit: CWD cwd="/" Oct 2 19:07:52.004000 audit: PATH item=0 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=1 name=(null) inode=16436 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=2 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=3 name=(null) inode=16437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=4 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=5 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=6 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=7 name=(null) inode=16439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=8 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=9 name=(null) inode=16440 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=10 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=11 name=(null) inode=16441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=12 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=13 name=(null) inode=16442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=14 name=(null) inode=16438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=15 name=(null) inode=16443 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=16 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=17 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=18 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=19 name=(null) inode=16445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=20 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=21 name=(null) inode=16446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=22 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=23 name=(null) inode=16447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=24 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=25 name=(null) inode=16448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=26 name=(null) inode=16444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=27 name=(null) inode=16449 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=28 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=29 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=30 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=31 name=(null) inode=16451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=32 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=33 name=(null) inode=16452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=34 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=35 name=(null) inode=16453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=36 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=37 name=(null) inode=16454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=38 name=(null) inode=16450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=39 name=(null) inode=16455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=40 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=41 name=(null) inode=16456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=42 name=(null) inode=16456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=43 name=(null) inode=16457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=44 name=(null) inode=16456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=45 name=(null) inode=16458 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=46 name=(null) inode=16456 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=47 name=(null) inode=16459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=48 name=(null) inode=16456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=49 name=(null) inode=16460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=50 name=(null) inode=16456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=51 name=(null) inode=16461 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=53 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=54 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=55 name=(null) inode=16463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=56 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=57 name=(null) inode=16464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=58 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=59 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=60 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=61 name=(null) inode=16466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=62 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=63 name=(null) inode=16467 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=64 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=65 name=(null) inode=16468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=66 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=67 name=(null) inode=16469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=68 name=(null) inode=16465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=69 name=(null) inode=16470 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=70 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=71 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=72 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=73 name=(null) inode=16472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=74 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=75 name=(null) inode=16473 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=76 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=77 name=(null) inode=16474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=78 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH 
item=79 name=(null) inode=16475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=80 name=(null) inode=16471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=81 name=(null) inode=16476 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=82 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=83 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=84 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=85 name=(null) inode=16478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=86 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=87 name=(null) inode=16479 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=88 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=89 name=(null) inode=16480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=90 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=91 name=(null) inode=16481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=92 name=(null) inode=16477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=93 name=(null) inode=16482 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=94 name=(null) inode=16462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=95 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=96 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=97 name=(null) inode=16484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=98 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=99 name=(null) inode=16485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=100 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=101 name=(null) inode=16486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=102 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=103 name=(null) inode=16487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=104 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PATH item=105 name=(null) inode=16488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:07:52.004000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:07:52.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.023786 systemd-networkd[1020]: lo: Link UP Oct 2 19:07:52.023790 systemd-networkd[1020]: lo: Gained carrier Oct 2 19:07:52.024231 systemd-networkd[1020]: Enumeration completed Oct 2 19:07:52.024309 systemd[1]: Started systemd-networkd.service. Oct 2 19:07:52.024312 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:07:52.025922 systemd-networkd[1020]: eth0: Link UP Oct 2 19:07:52.025930 systemd-networkd[1020]: eth0: Gained carrier Oct 2 19:07:52.038958 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:07:52.039087 systemd-networkd[1020]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:07:52.050962 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:07:52.055966 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:07:52.113969 kernel: kvm: Nested Virtualization enabled Oct 2 19:07:52.114125 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:07:52.131956 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:07:52.151466 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:07:52.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.153947 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:07:52.176989 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:07:52.203785 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:07:52.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.204587 systemd[1]: Reached target cryptsetup.target. Oct 2 19:07:52.206237 systemd[1]: Starting lvm2-activation.service... Oct 2 19:07:52.209830 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:07:52.239124 systemd[1]: Finished lvm2-activation.service. Oct 2 19:07:52.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.240825 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:07:52.241372 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:07:52.241396 systemd[1]: Reached target local-fs.target. Oct 2 19:07:52.241918 systemd[1]: Reached target machines.target. Oct 2 19:07:52.243404 systemd[1]: Starting ldconfig.service... Oct 2 19:07:52.244059 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:07:52.244105 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:07:52.244805 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:07:52.246554 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:07:52.248371 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:07:52.249316 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:07:52.249353 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:07:52.250589 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Oct 2 19:07:52.251877 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1039 (bootctl) Oct 2 19:07:52.253196 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:07:52.258489 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:07:52.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.314242 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:07:52.317630 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:07:52.321468 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:07:52.340317 systemd-fsck[1047]: fsck.fat 4.2 (2021-01-31) Oct 2 19:07:52.340317 systemd-fsck[1047]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:07:52.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:52.342291 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:07:52.344647 systemd[1]: Mounting boot.mount... Oct 2 19:07:53.208216 systemd[1]: Mounted boot.mount. Oct 2 19:07:53.230025 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:07:53.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.252043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:07:53.252729 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:07:53.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.295128 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:07:53.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.297385 systemd[1]: Starting audit-rules.service... Oct 2 19:07:53.323007 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:07:53.325509 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:07:53.327000 audit: BPF prog-id=27 op=LOAD Oct 2 19:07:53.329455 systemd[1]: Starting systemd-resolved.service... Oct 2 19:07:53.330000 audit: BPF prog-id=28 op=LOAD Oct 2 19:07:53.333580 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:07:53.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.337056 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:07:53.339029 systemd[1]: Finished clean-ca-certificates.service. 
Oct 2 19:07:53.340814 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:07:53.351000 audit[1062]: SYSTEM_BOOT pid=1062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.353840 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:07:53.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.362079 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:07:53.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:53.362000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:07:53.362000 audit[1071]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd7d472d10 a2=420 a3=0 items=0 ppid=1051 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:53.362000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:07:53.364019 augenrules[1071]: No rules Oct 2 19:07:53.364481 systemd[1]: Finished audit-rules.service. Oct 2 19:07:53.409739 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:07:53.410766 systemd[1]: Reached target time-set.target. Oct 2 19:07:53.899834 systemd-timesyncd[1061]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:07:53.899951 systemd-timesyncd[1061]: Initial clock synchronization to Mon 2023-10-02 19:07:53.899616 UTC. Oct 2 19:07:53.918504 systemd-resolved[1060]: Positive Trust Anchors: Oct 2 19:07:53.918516 systemd-resolved[1060]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:07:53.918550 systemd-resolved[1060]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:07:53.942604 systemd-resolved[1060]: Defaulting to hostname 'linux'. Oct 2 19:07:53.944603 systemd[1]: Started systemd-resolved.service. Oct 2 19:07:53.945653 systemd[1]: Reached target network.target. Oct 2 19:07:53.946322 systemd[1]: Reached target nss-lookup.target. Oct 2 19:07:54.011813 ldconfig[1038]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:07:54.019553 systemd[1]: Finished ldconfig.service. Oct 2 19:07:54.021773 systemd[1]: Starting systemd-update-done.service... Oct 2 19:07:54.028653 systemd[1]: Finished systemd-update-done.service. Oct 2 19:07:54.029824 systemd[1]: Reached target sysinit.target. 
Oct 2 19:07:54.030619 systemd[1]: Started motdgen.path. Oct 2 19:07:54.031498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:07:54.032759 systemd[1]: Started logrotate.timer. Oct 2 19:07:54.033562 systemd[1]: Started mdadm.timer. Oct 2 19:07:54.034183 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:07:54.034907 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:07:54.034930 systemd[1]: Reached target paths.target. Oct 2 19:07:54.035530 systemd[1]: Reached target timers.target. Oct 2 19:07:54.036562 systemd[1]: Listening on dbus.socket. Oct 2 19:07:54.038565 systemd[1]: Starting docker.socket... Oct 2 19:07:54.044074 systemd[1]: Listening on sshd.socket. Oct 2 19:07:54.044887 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:07:54.045365 systemd[1]: Listening on docker.socket. Oct 2 19:07:54.046110 systemd[1]: Reached target sockets.target. Oct 2 19:07:54.046881 systemd[1]: Reached target basic.target. Oct 2 19:07:54.047623 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:07:54.047648 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:07:54.049026 systemd[1]: Starting containerd.service... Oct 2 19:07:54.050821 systemd[1]: Starting dbus.service... Oct 2 19:07:54.052541 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:07:54.054460 systemd[1]: Starting extend-filesystems.service... Oct 2 19:07:54.055351 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:07:54.056908 systemd[1]: Starting motdgen.service... Oct 2 19:07:54.058674 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:07:54.060563 systemd[1]: Starting prepare-critools.service... Oct 2 19:07:54.062490 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:07:54.066700 systemd[1]: Starting sshd-keygen.service... Oct 2 19:07:54.070721 systemd[1]: Starting systemd-logind.service... Oct 2 19:07:54.071611 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:07:54.071691 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:07:54.072550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:07:54.075345 systemd[1]: Starting update-engine.service... Oct 2 19:07:54.077727 jq[1082]: false Oct 2 19:07:54.077783 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:07:54.082162 jq[1100]: true Oct 2 19:07:54.082711 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:07:54.083001 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:07:54.083459 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:07:54.083635 systemd[1]: Finished motdgen.service. 
Oct 2 19:07:54.085466 extend-filesystems[1083]: Found sr0 Oct 2 19:07:54.086548 extend-filesystems[1083]: Found vda Oct 2 19:07:54.087404 extend-filesystems[1083]: Found vda1 Oct 2 19:07:54.087404 extend-filesystems[1083]: Found vda2 Oct 2 19:07:54.089687 extend-filesystems[1083]: Found vda3 Oct 2 19:07:54.089687 extend-filesystems[1083]: Found usr Oct 2 19:07:54.095500 extend-filesystems[1083]: Found vda4 Oct 2 19:07:54.095500 extend-filesystems[1083]: Found vda6 Oct 2 19:07:54.095500 extend-filesystems[1083]: Found vda7 Oct 2 19:07:54.095500 extend-filesystems[1083]: Found vda9 Oct 2 19:07:54.091427 dbus-daemon[1081]: [system] SELinux support is enabled Oct 2 19:07:54.089696 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:07:54.099995 extend-filesystems[1083]: Checking size of /dev/vda9 Oct 2 19:07:54.089941 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:07:54.109520 tar[1103]: ./ Oct 2 19:07:54.109520 tar[1103]: ./loopback Oct 2 19:07:54.109720 extend-filesystems[1083]: Old size kept for /dev/vda9 Oct 2 19:07:54.093489 systemd[1]: Started dbus.service. Oct 2 19:07:54.111675 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:07:54.111986 systemd[1]: Finished extend-filesystems.service. Oct 2 19:07:54.119822 tar[1104]: crictl Oct 2 19:07:54.118801 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:07:54.118824 systemd[1]: Reached target system-config.target. Oct 2 19:07:54.119630 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:07:54.119642 systemd[1]: Reached target user-config.target. Oct 2 19:07:54.122440 jq[1112]: true Oct 2 19:07:54.196839 env[1113]: time="2023-10-02T19:07:54.196412497Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:07:54.199020 update_engine[1098]: I1002 19:07:54.198307 1098 main.cc:92] Flatcar Update Engine starting Oct 2 19:07:54.204380 systemd[1]: Started update-engine.service. Oct 2 19:07:54.206751 systemd-logind[1094]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:07:54.208257 update_engine[1098]: I1002 19:07:54.205646 1098 update_check_scheduler.cc:74] Next update check in 2m45s Oct 2 19:07:54.206773 systemd-logind[1094]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:07:54.207041 systemd-logind[1094]: New seat seat0. Oct 2 19:07:54.207361 systemd[1]: Started locksmithd.service. Oct 2 19:07:54.226210 systemd[1]: Started systemd-logind.service. Oct 2 19:07:54.235209 bash[1134]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:07:54.236006 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:07:54.240852 tar[1103]: ./bandwidth Oct 2 19:07:54.291099 env[1113]: time="2023-10-02T19:07:54.291031241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:07:54.291267 env[1113]: time="2023-10-02T19:07:54.291226848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293614 env[1113]: time="2023-10-02T19:07:54.293572607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293614 env[1113]: time="2023-10-02T19:07:54.293607412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293828 env[1113]: time="2023-10-02T19:07:54.293800785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293828 env[1113]: time="2023-10-02T19:07:54.293821374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293915 env[1113]: time="2023-10-02T19:07:54.293833286Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:07:54.293915 env[1113]: time="2023-10-02T19:07:54.293841962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.293915 env[1113]: time="2023-10-02T19:07:54.293902075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.294146 env[1113]: time="2023-10-02T19:07:54.294119032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:07:54.294248 env[1113]: time="2023-10-02T19:07:54.294221674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:07:54.294248 env[1113]: time="2023-10-02T19:07:54.294239898Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:07:54.294345 env[1113]: time="2023-10-02T19:07:54.294278421Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:07:54.294345 env[1113]: time="2023-10-02T19:07:54.294288880Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:07:54.306674 env[1113]: time="2023-10-02T19:07:54.306622140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:07:54.306674 env[1113]: time="2023-10-02T19:07:54.306672484Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:07:54.306674 env[1113]: time="2023-10-02T19:07:54.306685328Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306720494Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306733709Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306759357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306770337Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306783953Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306795535Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306807126Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306834818Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.306913 env[1113]: time="2023-10-02T19:07:54.306847362Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:07:54.307082 env[1113]: time="2023-10-02T19:07:54.306972156Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:07:54.307082 env[1113]: time="2023-10-02T19:07:54.307059209Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:07:54.307303 env[1113]: time="2023-10-02T19:07:54.307278941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:07:54.307351 env[1113]: time="2023-10-02T19:07:54.307308567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307351 env[1113]: time="2023-10-02T19:07:54.307321651Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:07:54.307391 env[1113]: time="2023-10-02T19:07:54.307368960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307391 env[1113]: time="2023-10-02T19:07:54.307381504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307428 env[1113]: time="2023-10-02T19:07:54.307392434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307428 env[1113]: time="2023-10-02T19:07:54.307402663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307428 env[1113]: time="2023-10-02T19:07:54.307413784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307428 env[1113]: time="2023-10-02T19:07:54.307424514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307502 env[1113]: time="2023-10-02T19:07:54.307434603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307502 env[1113]: time="2023-10-02T19:07:54.307444512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307502 env[1113]: time="2023-10-02T19:07:54.307457266Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Oct 2 19:07:54.307592 env[1113]: time="2023-10-02T19:07:54.307566671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307592 env[1113]: time="2023-10-02T19:07:54.307586167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307653 env[1113]: time="2023-10-02T19:07:54.307597038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307653 env[1113]: time="2023-10-02T19:07:54.307607588Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:07:54.307653 env[1113]: time="2023-10-02T19:07:54.307621724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:07:54.307653 env[1113]: time="2023-10-02T19:07:54.307631923Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:07:54.307653 env[1113]: time="2023-10-02T19:07:54.307652101Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:07:54.307766 env[1113]: time="2023-10-02T19:07:54.307685063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:07:54.307929 env[1113]: time="2023-10-02T19:07:54.307870962Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:07:54.307929 env[1113]: time="2023-10-02T19:07:54.307927718Z" level=info msg="Connect containerd service" Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.307969907Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.308414010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.308630286Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.308662276Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.308703793Z" level=info msg="containerd successfully booted in 0.113387s" Oct 2 19:07:54.310700 env[1113]: time="2023-10-02T19:07:54.310020292Z" level=info msg="Start subscribing containerd event" Oct 2 19:07:54.308828 systemd[1]: Started containerd.service. Oct 2 19:07:54.310936 tar[1103]: ./ptp Oct 2 19:07:54.316259 env[1113]: time="2023-10-02T19:07:54.316210892Z" level=info msg="Start recovering state" Oct 2 19:07:54.316396 env[1113]: time="2023-10-02T19:07:54.316318033Z" level=info msg="Start event monitor" Oct 2 19:07:54.316396 env[1113]: time="2023-10-02T19:07:54.316354962Z" level=info msg="Start snapshots syncer" Oct 2 19:07:54.316396 env[1113]: time="2023-10-02T19:07:54.316364660Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:07:54.316396 env[1113]: time="2023-10-02T19:07:54.316371934Z" level=info msg="Start streaming server" Oct 2 19:07:54.335618 systemd[1]: Created slice system-sshd.slice. Oct 2 19:07:54.360519 tar[1103]: ./vlan Oct 2 19:07:54.398400 locksmithd[1137]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:07:54.406122 tar[1103]: ./host-device Oct 2 19:07:54.454296 tar[1103]: ./tuning Oct 2 19:07:54.489642 tar[1103]: ./vrf Oct 2 19:07:54.524402 tar[1103]: ./sbr Oct 2 19:07:54.560301 tar[1103]: ./tap Oct 2 19:07:54.568938 systemd-networkd[1020]: eth0: Gained IPv6LL Oct 2 19:07:54.604271 tar[1103]: ./dhcp Oct 2 19:07:54.722795 systemd[1]: Finished prepare-critools.service. Oct 2 19:07:54.732225 tar[1103]: ./static Oct 2 19:07:54.754397 tar[1103]: ./firewall Oct 2 19:07:54.788331 tar[1103]: ./macvlan Oct 2 19:07:54.818994 tar[1103]: ./dummy Oct 2 19:07:54.849582 tar[1103]: ./bridge Oct 2 19:07:54.884280 tar[1103]: ./ipvlan Oct 2 19:07:54.915722 tar[1103]: ./portmap Oct 2 19:07:54.927905 sshd_keygen[1105]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:07:54.946483 tar[1103]: ./host-local Oct 2 19:07:54.947670 systemd[1]: Finished sshd-keygen.service. Oct 2 19:07:54.949919 systemd[1]: Starting issuegen.service... Oct 2 19:07:54.951303 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:47080.service. Oct 2 19:07:54.955242 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:07:54.955468 systemd[1]: Finished issuegen.service. Oct 2 19:07:54.958469 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:07:54.965602 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:07:54.967931 systemd[1]: Started getty@tty1.service. Oct 2 19:07:54.970024 systemd[1]: Started serial-getty@ttyS0.service. 
Oct 2 19:07:54.971033 systemd[1]: Reached target getty.target. Oct 2 19:07:55.024696 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:07:55.026845 systemd[1]: Reached target multi-user.target. Oct 2 19:07:55.029036 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:07:55.036172 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:07:55.036338 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:07:55.037240 systemd[1]: Startup finished in 899ms (kernel) + 10.174s (initrd) + 11.418s (userspace) = 22.492s. Oct 2 19:07:55.150509 sshd[1155]: Accepted publickey for core from 10.0.0.1 port 47080 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.151693 sshd[1155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.162476 systemd[1]: Created slice user-500.slice. Oct 2 19:07:55.164015 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:07:55.168911 systemd-logind[1094]: New session 1 of user core. Oct 2 19:07:55.176449 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:07:55.180322 systemd[1]: Starting user@500.service... Oct 2 19:07:55.183183 (systemd)[1166]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.260256 systemd[1166]: Queued start job for default target default.target. Oct 2 19:07:55.260730 systemd[1166]: Reached target paths.target. Oct 2 19:07:55.260767 systemd[1166]: Reached target sockets.target. Oct 2 19:07:55.260778 systemd[1166]: Reached target timers.target. Oct 2 19:07:55.260788 systemd[1166]: Reached target basic.target. Oct 2 19:07:55.260835 systemd[1166]: Reached target default.target. Oct 2 19:07:55.260856 systemd[1166]: Startup finished in 71ms. Oct 2 19:07:55.260987 systemd[1]: Started user@500.service. Oct 2 19:07:55.262053 systemd[1]: Started session-1.scope. Oct 2 19:07:55.315704 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:47090.service. Oct 2 19:07:55.348579 sshd[1175]: Accepted publickey for core from 10.0.0.1 port 47090 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.349798 sshd[1175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.354540 systemd[1]: Started session-2.scope. Oct 2 19:07:55.355300 systemd-logind[1094]: New session 2 of user core. Oct 2 19:07:55.411134 sshd[1175]: pam_unix(sshd:session): session closed for user core Oct 2 19:07:55.414127 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:47090.service: Deactivated successfully. Oct 2 19:07:55.414846 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:07:55.415398 systemd-logind[1094]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:07:55.417067 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:47094.service. Oct 2 19:07:55.417884 systemd-logind[1094]: Removed session 2. Oct 2 19:07:55.450308 sshd[1181]: Accepted publickey for core from 10.0.0.1 port 47094 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.451665 sshd[1181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.455992 systemd-logind[1094]: New session 3 of user core. Oct 2 19:07:55.456859 systemd[1]: Started session-3.scope. Oct 2 19:07:55.506766 sshd[1181]: pam_unix(sshd:session): session closed for user core Oct 2 19:07:55.509900 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:47094.service: Deactivated successfully. Oct 2 19:07:55.510460 systemd[1]: session-3.scope: Deactivated successfully. 
Oct 2 19:07:55.511020 systemd-logind[1094]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:07:55.511851 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:47102.service. Oct 2 19:07:55.512458 systemd-logind[1094]: Removed session 3. Oct 2 19:07:55.545595 sshd[1187]: Accepted publickey for core from 10.0.0.1 port 47102 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.546708 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.550077 systemd-logind[1094]: New session 4 of user core. Oct 2 19:07:55.550977 systemd[1]: Started session-4.scope. Oct 2 19:07:55.604190 sshd[1187]: pam_unix(sshd:session): session closed for user core Oct 2 19:07:55.606966 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:47102.service: Deactivated successfully. Oct 2 19:07:55.607484 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:07:55.607937 systemd-logind[1094]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:07:55.608661 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:47110.service. Oct 2 19:07:55.609234 systemd-logind[1094]: Removed session 4. Oct 2 19:07:55.639485 sshd[1193]: Accepted publickey for core from 10.0.0.1 port 47110 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.640672 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.644406 systemd-logind[1094]: New session 5 of user core. Oct 2 19:07:55.645334 systemd[1]: Started session-5.scope. Oct 2 19:07:55.704780 sudo[1196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:07:55.704948 sudo[1196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:07:55.713978 dbus-daemon[1081]: received setenforce notice (enforcing=827400064) Oct 2 19:07:55.715590 sudo[1196]: pam_unix(sudo:session): session closed for user root Oct 2 19:07:55.717378 sshd[1193]: pam_unix(sshd:session): session closed for user core Oct 2 19:07:55.719983 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:47110.service: Deactivated successfully. Oct 2 19:07:55.720526 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:07:55.720977 systemd-logind[1094]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:07:55.721798 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:47116.service. Oct 2 19:07:55.722435 systemd-logind[1094]: Removed session 5. Oct 2 19:07:55.753872 sshd[1200]: Accepted publickey for core from 10.0.0.1 port 47116 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.755148 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.758625 systemd-logind[1094]: New session 6 of user core. Oct 2 19:07:55.759337 systemd[1]: Started session-6.scope. Oct 2 19:07:55.809963 sudo[1204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:07:55.810118 sudo[1204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:07:55.812186 sudo[1204]: pam_unix(sudo:session): session closed for user root Oct 2 19:07:55.817152 sudo[1203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:07:55.817305 sudo[1203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:07:55.825335 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:07:55.826000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:07:55.826000 audit[1207]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe45f52c20 a2=420 a3=0 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:55.826000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:07:55.827149 auditctl[1207]: No rules Oct 2 19:07:55.827336 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:07:55.827495 systemd[1]: Stopped audit-rules.service. Oct 2 19:07:55.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.828680 systemd[1]: Starting audit-rules.service... Oct 2 19:07:55.843338 augenrules[1224]: No rules Oct 2 19:07:55.843848 systemd[1]: Finished audit-rules.service. Oct 2 19:07:55.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.844845 sudo[1203]: pam_unix(sudo:session): session closed for user root Oct 2 19:07:55.844000 audit[1203]: USER_END pid=1203 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.844000 audit[1203]: CRED_DISP pid=1203 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.846002 sshd[1200]: pam_unix(sshd:session): session closed for user core Oct 2 19:07:55.846000 audit[1200]: USER_END pid=1200 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.846000 audit[1200]: CRED_DISP pid=1200 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.848706 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:47116.service: Deactivated successfully. Oct 2 19:07:55.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.46:22-10.0.0.1:47116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.849318 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:07:55.849752 systemd-logind[1094]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:07:55.850643 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:47128.service. Oct 2 19:07:55.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.46:22-10.0.0.1:47128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:07:55.851338 systemd-logind[1094]: Removed session 6. Oct 2 19:07:55.880000 audit[1230]: USER_ACCT pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.881286 sshd[1230]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:9/VFs6Vh3tGO5nFEXFlJ5Qu3Hg4nXNY9KvFKo+bazB4 Oct 2 19:07:55.881000 audit[1230]: CRED_ACQ pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.881000 audit[1230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58162f30 a2=3 a3=0 items=0 ppid=1 pid=1230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:55.881000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:07:55.882262 sshd[1230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:07:55.885599 systemd-logind[1094]: New session 7 of user core. Oct 2 19:07:55.886520 systemd[1]: Started session-7.scope. Oct 2 19:07:55.889000 audit[1230]: USER_START pid=1230 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.890000 audit[1233]: CRED_ACQ pid=1233 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:07:55.936000 audit[1234]: USER_ACCT pid=1234 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.936000 audit[1234]: CRED_REFR pid=1234 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:07:55.937448 sudo[1234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:07:55.937606 sudo[1234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:07:55.938000 audit[1234]: USER_START pid=1234 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:07:56.451532 systemd[1]: Reloading. 
Oct 2 19:07:56.514863 /usr/lib/systemd/system-generators/torcx-generator[1264]: time="2023-10-02T19:07:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:07:56.514896 /usr/lib/systemd/system-generators/torcx-generator[1264]: time="2023-10-02T19:07:56Z" level=info msg="torcx already run" Oct 2 19:07:56.581636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:07:56.581653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:07:56.599405 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.669959 kernel: kauditd_printk_skb: 202 callbacks suppressed Oct 2 19:07:56.670039 kernel: audit: type=1400 audit(1696273676.668:179): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.673998 kernel: audit: type=1400 audit(1696273676.668:180): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.674045 kernel: audit: type=1400 audit(1696273676.668:181): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.677892 kernel: audit: type=1400 audit(1696273676.668:182): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.677944 kernel: audit: type=1400 audit(1696273676.668:183): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.682030 kernel: audit: type=1400 audit(1696273676.668:184): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.682170 kernel: audit: type=1400 audit(1696273676.668:185): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684005 kernel: audit: type=1400 audit(1696273676.668:186): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.686631 kernel: audit: type=1400 audit(1696273676.668:187): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.687838 kernel: audit: type=1400 audit(1696273676.671:188): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit: BPF prog-id=34 op=LOAD Oct 2 19:07:56.671000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit: BPF prog-id=35 op=LOAD Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit: BPF prog-id=36 op=LOAD Oct 2 19:07:56.679000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:07:56.679000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.681000 audit: BPF prog-id=37 op=LOAD Oct 2 19:07:56.681000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.687000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.687000 audit: BPF prog-id=38 op=LOAD Oct 2 19:07:56.687000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit: BPF prog-id=39 op=LOAD Oct 2 19:07:56.689000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.689000 audit: BPF prog-id=40 op=LOAD Oct 2 19:07:56.690000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit: BPF prog-id=41 op=LOAD Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.690000 audit: BPF prog-id=42 op=LOAD Oct 2 19:07:56.690000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:07:56.690000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.691000 audit: BPF prog-id=43 op=LOAD Oct 2 19:07:56.691000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit: BPF prog-id=44 op=LOAD Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.692000 audit: BPF prog-id=45 
op=LOAD Oct 2 19:07:56.692000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:07:56.692000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit: BPF prog-id=46 op=LOAD Oct 2 19:07:56.693000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit: BPF prog-id=47 op=LOAD Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:56.693000 audit: BPF prog-id=48 op=LOAD Oct 2 19:07:56.693000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:07:56.693000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:07:56.703425 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:07:57.837773 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:07:57.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:57.838340 systemd[1]: Reached target network-online.target. Oct 2 19:07:57.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:57.839767 systemd[1]: Started kubelet.service. Oct 2 19:07:57.851353 systemd[1]: Starting coreos-metadata.service... 
Oct 2 19:07:57.858633 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:07:57.858847 systemd[1]: Finished coreos-metadata.service. Oct 2 19:07:57.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:57.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:57.908365 kubelet[1306]: E1002 19:07:57.908241 1306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 19:07:57.910443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:07:57.910558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:07:57.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:07:58.252817 systemd[1]: Stopped kubelet.service. Oct 2 19:07:58.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:58.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:58.269980 systemd[1]: Reloading. Oct 2 19:07:58.326417 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2023-10-02T19:07:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:07:58.326452 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2023-10-02T19:07:58Z" level=info msg="torcx already run" Oct 2 19:07:58.386611 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:07:58.386633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:07:58.403485 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit: BPF prog-id=49 op=LOAD Oct 2 19:07:58.471000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit: BPF prog-id=50 op=LOAD Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit: BPF prog-id=51 op=LOAD Oct 2 19:07:58.472000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:07:58.472000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.472000 audit: BPF prog-id=52 op=LOAD Oct 2 19:07:58.472000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.474000 audit: BPF prog-id=53 op=LOAD Oct 2 19:07:58.474000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit: BPF prog-id=54 op=LOAD Oct 2 19:07:58.475000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit: BPF prog-id=55 op=LOAD Oct 2 19:07:58.476000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit: BPF prog-id=56 op=LOAD Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit: BPF prog-id=57 op=LOAD Oct 2 19:07:58.476000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:07:58.476000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit: BPF prog-id=58 op=LOAD Oct 2 19:07:58.477000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit: BPF prog-id=59 op=LOAD Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.477000 audit: BPF prog-id=60 
op=LOAD Oct 2 19:07:58.477000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:07:58.477000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit: BPF prog-id=61 op=LOAD Oct 2 19:07:58.478000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit: BPF prog-id=62 op=LOAD Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.478000 audit: BPF prog-id=63 op=LOAD Oct 2 19:07:58.478000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:07:58.478000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:07:58.489469 systemd[1]: Started kubelet.service. Oct 2 19:07:58.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:07:58.538678 kubelet[1417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:07:58.538678 kubelet[1417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:07:58.538678 kubelet[1417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:07:58.539051 kubelet[1417]: I1002 19:07:58.538657 1417 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:07:58.848416 kubelet[1417]: I1002 19:07:58.848310 1417 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:07:58.848416 kubelet[1417]: I1002 19:07:58.848345 1417 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:07:58.848625 kubelet[1417]: I1002 19:07:58.848603 1417 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:07:58.850537 kubelet[1417]: I1002 19:07:58.850511 1417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:07:58.856574 kubelet[1417]: I1002 19:07:58.856551 1417 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:07:58.856762 kubelet[1417]: I1002 19:07:58.856726 1417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:07:58.856894 kubelet[1417]: I1002 19:07:58.856885 1417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:07:58.856989 kubelet[1417]: I1002 19:07:58.856909 1417 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:07:58.856989 kubelet[1417]: I1002 19:07:58.856918 1417 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:07:58.857041 kubelet[1417]: I1002 19:07:58.857025 1417 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:07:58.857132 kubelet[1417]: I1002 19:07:58.857122 1417 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:07:58.857164 kubelet[1417]: I1002 19:07:58.857138 1417 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:07:58.857164 
kubelet[1417]: I1002 19:07:58.857164 1417 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:07:58.857222 kubelet[1417]: I1002 19:07:58.857176 1417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:07:58.857273 kubelet[1417]: E1002 19:07:58.857249 1417 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:58.857310 kubelet[1417]: E1002 19:07:58.857294 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:07:58.858033 kubelet[1417]: I1002 19:07:58.857973 1417 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:07:58.858356 kubelet[1417]: W1002 19:07:58.858322 1417 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:07:58.858882 kubelet[1417]: I1002 19:07:58.858851 1417 server.go:1232] "Started kubelet" Oct 2 19:07:58.858967 kubelet[1417]: I1002 19:07:58.858947 1417 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:07:58.859057 kubelet[1417]: I1002 19:07:58.859035 1417 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:07:58.859331 kubelet[1417]: I1002 19:07:58.859311 1417 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:07:58.859697 kubelet[1417]: E1002 19:07:58.859677 1417 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:07:58.859697 kubelet[1417]: I1002 19:07:58.859686 1417 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:07:58.859697 kubelet[1417]: E1002 19:07:58.859698 1417 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:07:58.860000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.860000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:07:58.860000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d2c240 a1=c0005d4828 a2=c000d2c210 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:58.860000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:07:58.861236 kubelet[1417]: I1002 19:07:58.861216 1417 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:07:58.860000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:58.860000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:07:58.860000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d242a0 a1=c0005d4840 a2=c000d2c2d0 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:58.860000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:07:58.861630 kubelet[1417]: I1002 19:07:58.861613 1417 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:07:58.861818 kubelet[1417]: I1002 19:07:58.861803 1417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:07:58.862757 kubelet[1417]: I1002 19:07:58.862669 1417 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:07:58.862891 kubelet[1417]: I1002 19:07:58.862875 1417 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:07:58.862995 kubelet[1417]: I1002 19:07:58.862983 1417 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:07:58.865495 kubelet[1417]: E1002 19:07:58.865472 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.46\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:07:58.865690 kubelet[1417]: W1002 19:07:58.865670 1417 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.46" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:07:58.865832 kubelet[1417]: E1002 19:07:58.865816 1417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.46" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:07:58.866452 kubelet[1417]: E1002 19:07:58.866323 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7639929a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 858818202, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 858818202, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:58.866661 kubelet[1417]: W1002 19:07:58.866639 1417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:07:58.866725 kubelet[1417]: E1002 19:07:58.866667 1417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:07:58.866809 kubelet[1417]: W1002 19:07:58.866764 1417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:07:58.866809 kubelet[1417]: E1002 19:07:58.866788 1417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:07:58.867212 kubelet[1417]: E1002 19:07:58.867149 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7646e3bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 859690939, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 859690939, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:58.879919 kubelet[1417]: I1002 19:07:58.879897 1417 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:07:58.879919 kubelet[1417]: I1002 19:07:58.879911 1417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:07:58.879919 kubelet[1417]: I1002 19:07:58.879927 1417 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:07:58.885543 kubelet[1417]: E1002 19:07:58.885440 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771a5df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.46 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879270367, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879270367, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:58.886364 kubelet[1417]: E1002 19:07:58.886314 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771b6f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.46 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879274745, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879274745, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:58.887252 kubelet[1417]: E1002 19:07:58.887201 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771c6dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.46 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879278813, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879278813, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:58.894000 audit[1431]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:58.894000 audit[1431]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd9b7b2710 a2=0 a3=7ffd9b7b26fc items=0 ppid=1417 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:58.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:07:58.895000 audit[1436]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:58.895000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd34703cd0 a2=0 a3=7ffd34703cbc items=0 ppid=1417 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:58.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:07:58.964244 kubelet[1417]: I1002 19:07:58.964207 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.46" Oct 2 19:07:58.965764 kubelet[1417]: E1002 19:07:58.965681 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771a5df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.46 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879270367, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 964157511, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771a5df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:58.965764 kubelet[1417]: E1002 19:07:58.965758 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.46" Oct 2 19:07:58.966588 kubelet[1417]: E1002 19:07:58.966532 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771b6f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.46 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879274745, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 964163262, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771b6f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:58.967568 kubelet[1417]: E1002 19:07:58.967501 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771c6dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.46 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879278813, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 964166708, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771c6dd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:58.897000 audit[1438]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:58.897000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd90859100 a2=0 a3=7ffd908590ec items=0 ppid=1417 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:58.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:07:59.010000 audit[1443]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:59.010000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffef761e00 a2=0 a3=7fffef761dec items=0 ppid=1417 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:07:59.039937 kubelet[1417]: I1002 19:07:59.039892 1417 policy_none.go:49] "None policy: Start" Oct 2 19:07:59.040688 kubelet[1417]: I1002 19:07:59.040674 1417 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:07:59.040765 kubelet[1417]: I1002 19:07:59.040692 1417 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:07:59.067485 kubelet[1417]: E1002 19:07:59.067447 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.46\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 
19:07:59.097000 audit[1448]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:59.097000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc128a10f0 a2=0 a3=7ffc128a10dc items=0 ppid=1417 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:07:59.098897 kubelet[1417]: I1002 19:07:59.098874 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 2 19:07:59.099159 systemd[1]: Created slice kubepods.slice. Oct 2 19:07:59.100080 kubelet[1417]: I1002 19:07:59.100053 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:07:59.100133 kubelet[1417]: I1002 19:07:59.100094 1417 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:07:59.100133 kubelet[1417]: I1002 19:07:59.100117 1417 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:07:59.100189 kubelet[1417]: E1002 19:07:59.100182 1417 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:07:59.099000 audit[1449]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:07:59.099000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7d5c7400 a2=0 a3=7ffd7d5c73ec items=0 ppid=1417 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:07:59.099000 audit[1450]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:59.099000 audit[1450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf8a36c50 a2=0 a3=7ffcf8a36c3c items=0 ppid=1417 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:07:59.100000 audit[1451]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:59.100000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc0e2815d0 a2=0 a3=7ffc0e2815bc items=0 ppid=1417 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.100000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:07:59.100000 audit[1452]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=1452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:07:59.100000 audit[1452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdef25fea0 a2=0 a3=7ffdef25fe8c items=0 ppid=1417 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:07:59.102039 kubelet[1417]: W1002 19:07:59.102019 1417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:07:59.102085 kubelet[1417]: E1002 19:07:59.102044 1417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:07:59.101000 audit[1453]: NETFILTER_CFG table=mangle:11 family=10 entries=1 op=nft_register_chain pid=1453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:07:59.101000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca2adba90 a2=0 a3=7ffca2adba7c items=0 ppid=1417 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.101000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:07:59.102000 audit[1454]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:07:59.102000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc07d20b80 a2=0 a3=7ffc07d20b6c items=0 ppid=1417 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:07:59.104054 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:07:59.103000 audit[1455]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:07:59.103000 audit[1455]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2e6cc510 a2=0 a3=7ffc2e6cc4fc items=0 ppid=1417 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.103000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:07:59.107138 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:07:59.116358 kubelet[1417]: I1002 19:07:59.116322 1417 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:07:59.115000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:07:59.115000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:07:59.115000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e88810 a1=c000ff5290 a2=c000e887e0 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:07:59.115000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:07:59.116636 kubelet[1417]: I1002 19:07:59.116427 1417 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:07:59.116636 kubelet[1417]: I1002 19:07:59.116603 1417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:07:59.118084 kubelet[1417]: E1002 19:07:59.117171 1417 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.46\" not found" Oct 2 19:07:59.120098 kubelet[1417]: E1002 19:07:59.120003 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec85b13bb6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 118318518, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 118318518, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:59.167422 kubelet[1417]: I1002 19:07:59.167380 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.46" Oct 2 19:07:59.169068 kubelet[1417]: E1002 19:07:59.169047 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.46" Oct 2 19:07:59.169198 kubelet[1417]: E1002 19:07:59.169120 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771a5df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.46 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879270367, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 167323960, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771a5df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:59.170415 kubelet[1417]: E1002 19:07:59.170359 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771b6f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.46 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879274745, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 167335031, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771b6f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:59.171229 kubelet[1417]: E1002 19:07:59.171116 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771c6dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.46 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879278813, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 167337686, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771c6dd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:59.469851 kubelet[1417]: E1002 19:07:59.469668 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.46\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:07:59.570954 kubelet[1417]: I1002 19:07:59.570921 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.46" Oct 2 19:07:59.572664 kubelet[1417]: E1002 19:07:59.572641 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.46" Oct 2 19:07:59.572745 kubelet[1417]: E1002 19:07:59.572639 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771a5df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.46 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879270367, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 570864172, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771a5df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:59.573770 kubelet[1417]: E1002 19:07:59.573644 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771b6f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.46 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879274745, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 570879291, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771b6f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:07:59.574768 kubelet[1417]: E1002 19:07:59.574701 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46.178a5fec7771c6dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.46", UID:"10.0.0.46", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.46 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.46"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 7, 58, 879278813, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 7, 59, 570883459, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.46"}': 'events "10.0.0.46.178a5fec7771c6dd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:07:59.786509 kubelet[1417]: W1002 19:07:59.786379 1417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:07:59.786509 kubelet[1417]: E1002 19:07:59.786421 1417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:07:59.850752 kubelet[1417]: I1002 19:07:59.850660 1417 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:07:59.857991 kubelet[1417]: E1002 19:07:59.857943 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:00.220094 kubelet[1417]: E1002 19:08:00.219976 1417 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.46" not found Oct 2 19:08:00.273791 kubelet[1417]: E1002 19:08:00.273751 1417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.46\" not found" node="10.0.0.46" Oct 2 19:08:00.374439 kubelet[1417]: I1002 19:08:00.374408 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.46" Oct 2 19:08:00.377980 kubelet[1417]: I1002 19:08:00.377954 1417 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.46" Oct 2 19:08:00.747983 kubelet[1417]: I1002 19:08:00.747944 1417 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:08:00.748505 env[1113]: time="2023-10-02T19:08:00.748463982Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:08:00.748819 kubelet[1417]: I1002 19:08:00.748748 1417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:08:00.858175 kubelet[1417]: E1002 19:08:00.858125 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:00.858390 kubelet[1417]: I1002 19:08:00.858242 1417 apiserver.go:52] "Watching apiserver" Oct 2 19:08:00.860790 kubelet[1417]: I1002 19:08:00.860766 1417 topology_manager.go:215] "Topology Admit Handler" podUID="c26f8129-1c24-4209-a8b8-5073db1c8880" podNamespace="calico-system" podName="calico-node-gv4q6" Oct 2 19:08:00.863329 kubelet[1417]: I1002 19:08:00.863299 1417 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:08:00.865429 systemd[1]: Created slice kubepods-besteffort-podc26f8129_1c24_4209_a8b8_5073db1c8880.slice. 
Oct 2 19:08:00.873979 kubelet[1417]: I1002 19:08:00.873960 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-lib-modules\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874118 kubelet[1417]: I1002 19:08:00.874005 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-policysync\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874118 kubelet[1417]: I1002 19:08:00.874034 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-var-run-calico\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874118 kubelet[1417]: I1002 19:08:00.874063 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-cni-net-dir\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874187 kubelet[1417]: I1002 19:08:00.874150 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klntw\" (UniqueName: \"kubernetes.io/projected/c26f8129-1c24-4209-a8b8-5073db1c8880-kube-api-access-klntw\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874212 kubelet[1417]: I1002 19:08:00.874187 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-cni-log-dir\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874238 kubelet[1417]: I1002 19:08:00.874216 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-flexvol-driver-host\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874307 kubelet[1417]: I1002 19:08:00.874284 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-xtables-lock\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874367 kubelet[1417]: I1002 19:08:00.874344 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26f8129-1c24-4209-a8b8-5073db1c8880-tigera-ca-bundle\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874392 kubelet[1417]: I1002 19:08:00.874383 1417 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c26f8129-1c24-4209-a8b8-5073db1c8880-node-certs\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874415 kubelet[1417]: I1002 19:08:00.874406 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-var-lib-calico\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.874443 kubelet[1417]: I1002 19:08:00.874425 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c26f8129-1c24-4209-a8b8-5073db1c8880-cni-bin-dir\") pod \"calico-node-gv4q6\" (UID: \"c26f8129-1c24-4209-a8b8-5073db1c8880\") " pod="calico-system/calico-node-gv4q6" Oct 2 19:08:00.981013 kubelet[1417]: E1002 19:08:00.980982 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:00.981013 kubelet[1417]: W1002 19:08:00.981006 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:00.981194 kubelet[1417]: E1002 19:08:00.981064 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.028536 kubelet[1417]: I1002 19:08:01.028406 1417 topology_manager.go:215] "Topology Admit Handler" podUID="f231a906-87e6-422f-81d6-82f96536a03d" podNamespace="kube-system" podName="kube-proxy-n7wzf" Oct 2 19:08:01.035483 systemd[1]: Created slice kubepods-besteffort-podf231a906_87e6_422f_81d6_82f96536a03d.slice. Oct 2 19:08:01.040512 sudo[1234]: pam_unix(sudo:session): session closed for user root Oct 2 19:08:01.039000 audit[1234]: USER_END pid=1234 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:08:01.039000 audit[1234]: CRED_DISP pid=1234 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:08:01.044006 sshd[1230]: pam_unix(sshd:session): session closed for user core Oct 2 19:08:01.047000 audit[1230]: USER_END pid=1230 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:08:01.047000 audit[1230]: CRED_DISP pid=1230 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:08:01.049837 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:47128.service: Deactivated successfully. 
Oct 2 19:08:01.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.46:22-10.0.0.1:47128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:08:01.050804 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:08:01.055448 kubelet[1417]: E1002 19:08:01.055377 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.055448 kubelet[1417]: W1002 19:08:01.055424 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.055683 kubelet[1417]: E1002 19:08:01.055489 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.056001 systemd-logind[1094]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:08:01.057626 systemd-logind[1094]: Removed session 7. Oct 2 19:08:01.065866 kubelet[1417]: E1002 19:08:01.065519 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.065866 kubelet[1417]: W1002 19:08:01.065537 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.065866 kubelet[1417]: E1002 19:08:01.065558 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.065866 kubelet[1417]: E1002 19:08:01.065808 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.065866 kubelet[1417]: W1002 19:08:01.065816 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.065866 kubelet[1417]: E1002 19:08:01.065827 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.066202 kubelet[1417]: E1002 19:08:01.065953 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066202 kubelet[1417]: W1002 19:08:01.065960 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066202 kubelet[1417]: E1002 19:08:01.065968 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.066202 kubelet[1417]: E1002 19:08:01.066098 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066202 kubelet[1417]: W1002 19:08:01.066113 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066202 kubelet[1417]: E1002 19:08:01.066121 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.066395 kubelet[1417]: E1002 19:08:01.066366 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066395 kubelet[1417]: W1002 19:08:01.066377 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066395 kubelet[1417]: E1002 19:08:01.066387 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.066528 kubelet[1417]: E1002 19:08:01.066495 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066528 kubelet[1417]: W1002 19:08:01.066500 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066528 kubelet[1417]: E1002 19:08:01.066508 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.066751 kubelet[1417]: E1002 19:08:01.066709 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066751 kubelet[1417]: W1002 19:08:01.066724 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066751 kubelet[1417]: E1002 19:08:01.066748 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.066924 kubelet[1417]: E1002 19:08:01.066906 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.066924 kubelet[1417]: W1002 19:08:01.066916 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.066924 kubelet[1417]: E1002 19:08:01.066925 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.067188 kubelet[1417]: E1002 19:08:01.067177 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.067188 kubelet[1417]: W1002 19:08:01.067185 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.067241 kubelet[1417]: E1002 19:08:01.067194 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.067409 kubelet[1417]: E1002 19:08:01.067392 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.067409 kubelet[1417]: W1002 19:08:01.067401 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.067477 kubelet[1417]: E1002 19:08:01.067412 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.067567 kubelet[1417]: E1002 19:08:01.067559 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.067604 kubelet[1417]: W1002 19:08:01.067567 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.067604 kubelet[1417]: E1002 19:08:01.067576 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.067978 kubelet[1417]: E1002 19:08:01.067949 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.068020 kubelet[1417]: W1002 19:08:01.067980 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.068048 kubelet[1417]: E1002 19:08:01.068022 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.068387 kubelet[1417]: E1002 19:08:01.068362 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.068387 kubelet[1417]: W1002 19:08:01.068378 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.068527 kubelet[1417]: E1002 19:08:01.068416 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.068784 kubelet[1417]: E1002 19:08:01.068753 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.068868 kubelet[1417]: W1002 19:08:01.068788 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.068868 kubelet[1417]: E1002 19:08:01.068827 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.069185 kubelet[1417]: E1002 19:08:01.069160 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.069185 kubelet[1417]: W1002 19:08:01.069174 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.069342 kubelet[1417]: E1002 19:08:01.069199 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.069565 kubelet[1417]: E1002 19:08:01.069541 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.069565 kubelet[1417]: W1002 19:08:01.069557 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.069692 kubelet[1417]: E1002 19:08:01.069572 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.077139 kubelet[1417]: E1002 19:08:01.077071 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.077139 kubelet[1417]: W1002 19:08:01.077112 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.077411 kubelet[1417]: E1002 19:08:01.077197 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.077411 kubelet[1417]: I1002 19:08:01.077256 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f231a906-87e6-422f-81d6-82f96536a03d-xtables-lock\") pod \"kube-proxy-n7wzf\" (UID: \"f231a906-87e6-422f-81d6-82f96536a03d\") " pod="kube-system/kube-proxy-n7wzf" Oct 2 19:08:01.077673 kubelet[1417]: E1002 19:08:01.077649 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.077673 kubelet[1417]: W1002 19:08:01.077665 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.077855 kubelet[1417]: E1002 19:08:01.077691 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.077855 kubelet[1417]: I1002 19:08:01.077850 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f231a906-87e6-422f-81d6-82f96536a03d-kube-proxy\") pod \"kube-proxy-n7wzf\" (UID: \"f231a906-87e6-422f-81d6-82f96536a03d\") " pod="kube-system/kube-proxy-n7wzf" Oct 2 19:08:01.078324 kubelet[1417]: E1002 19:08:01.078297 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.078324 kubelet[1417]: W1002 19:08:01.078319 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.078494 kubelet[1417]: E1002 19:08:01.078344 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.078895 kubelet[1417]: E1002 19:08:01.078649 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.078895 kubelet[1417]: W1002 19:08:01.078668 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.078895 kubelet[1417]: E1002 19:08:01.078699 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.079166 kubelet[1417]: E1002 19:08:01.078940 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.079166 kubelet[1417]: W1002 19:08:01.078950 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.079166 kubelet[1417]: E1002 19:08:01.078973 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.079166 kubelet[1417]: I1002 19:08:01.078995 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f231a906-87e6-422f-81d6-82f96536a03d-lib-modules\") pod \"kube-proxy-n7wzf\" (UID: \"f231a906-87e6-422f-81d6-82f96536a03d\") " pod="kube-system/kube-proxy-n7wzf" Oct 2 19:08:01.079270 kubelet[1417]: E1002 19:08:01.079239 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.079270 kubelet[1417]: W1002 19:08:01.079250 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.079346 kubelet[1417]: E1002 19:08:01.079329 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.079390 kubelet[1417]: I1002 19:08:01.079356 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbb5\" (UniqueName: \"kubernetes.io/projected/f231a906-87e6-422f-81d6-82f96536a03d-kube-api-access-brbb5\") pod \"kube-proxy-n7wzf\" (UID: \"f231a906-87e6-422f-81d6-82f96536a03d\") " pod="kube-system/kube-proxy-n7wzf" Oct 2 19:08:01.079479 kubelet[1417]: E1002 19:08:01.079458 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.079479 kubelet[1417]: W1002 19:08:01.079469 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.079545 kubelet[1417]: E1002 19:08:01.079524 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.079654 kubelet[1417]: E1002 19:08:01.079638 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.079654 kubelet[1417]: W1002 19:08:01.079650 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.079717 kubelet[1417]: E1002 19:08:01.079668 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.079861 kubelet[1417]: E1002 19:08:01.079829 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.079861 kubelet[1417]: W1002 19:08:01.079842 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.079861 kubelet[1417]: E1002 19:08:01.079857 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.080121 kubelet[1417]: E1002 19:08:01.080071 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.080121 kubelet[1417]: W1002 19:08:01.080110 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.080283 kubelet[1417]: E1002 19:08:01.080149 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.081001 kubelet[1417]: E1002 19:08:01.080953 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.081152 kubelet[1417]: W1002 19:08:01.080992 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.081152 kubelet[1417]: E1002 19:08:01.081083 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.081407 kubelet[1417]: E1002 19:08:01.081381 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.081407 kubelet[1417]: W1002 19:08:01.081395 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.081407 kubelet[1417]: E1002 19:08:01.081407 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.175093 kubelet[1417]: E1002 19:08:01.175015 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:01.176155 env[1113]: time="2023-10-02T19:08:01.176097077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv4q6,Uid:c26f8129-1c24-4209-a8b8-5073db1c8880,Namespace:calico-system,Attempt:0,}" Oct 2 19:08:01.180205 kubelet[1417]: E1002 19:08:01.180183 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.180205 kubelet[1417]: W1002 19:08:01.180199 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.180357 kubelet[1417]: E1002 19:08:01.180230 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.180454 kubelet[1417]: E1002 19:08:01.180419 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.180454 kubelet[1417]: W1002 19:08:01.180428 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.180454 kubelet[1417]: E1002 19:08:01.180442 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.180650 kubelet[1417]: E1002 19:08:01.180621 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.180650 kubelet[1417]: W1002 19:08:01.180632 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.180650 kubelet[1417]: E1002 19:08:01.180645 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.180932 kubelet[1417]: E1002 19:08:01.180893 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.180932 kubelet[1417]: W1002 19:08:01.180904 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.180932 kubelet[1417]: E1002 19:08:01.180918 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.181214 kubelet[1417]: E1002 19:08:01.181179 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.181261 kubelet[1417]: W1002 19:08:01.181211 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.181261 kubelet[1417]: E1002 19:08:01.181249 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.181527 kubelet[1417]: E1002 19:08:01.181509 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.181527 kubelet[1417]: W1002 19:08:01.181524 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.181599 kubelet[1417]: E1002 19:08:01.181581 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.181758 kubelet[1417]: E1002 19:08:01.181715 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.181838 kubelet[1417]: W1002 19:08:01.181763 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.181873 kubelet[1417]: E1002 19:08:01.181849 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.181990 kubelet[1417]: E1002 19:08:01.181977 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.181990 kubelet[1417]: W1002 19:08:01.181987 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.182056 kubelet[1417]: E1002 19:08:01.182047 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.182177 kubelet[1417]: E1002 19:08:01.182160 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.182177 kubelet[1417]: W1002 19:08:01.182173 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.182255 kubelet[1417]: E1002 19:08:01.182237 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.182340 kubelet[1417]: E1002 19:08:01.182327 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.182340 kubelet[1417]: W1002 19:08:01.182335 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.182427 kubelet[1417]: E1002 19:08:01.182346 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.182578 kubelet[1417]: E1002 19:08:01.182560 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.182578 kubelet[1417]: W1002 19:08:01.182574 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.182655 kubelet[1417]: E1002 19:08:01.182593 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.182817 kubelet[1417]: E1002 19:08:01.182797 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.182817 kubelet[1417]: W1002 19:08:01.182810 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.182911 kubelet[1417]: E1002 19:08:01.182824 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.183023 kubelet[1417]: E1002 19:08:01.183001 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.183023 kubelet[1417]: W1002 19:08:01.183014 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.183181 kubelet[1417]: E1002 19:08:01.183041 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.183373 kubelet[1417]: E1002 19:08:01.183354 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.183373 kubelet[1417]: W1002 19:08:01.183366 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.183472 kubelet[1417]: E1002 19:08:01.183417 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.183586 kubelet[1417]: E1002 19:08:01.183569 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.183586 kubelet[1417]: W1002 19:08:01.183579 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.183698 kubelet[1417]: E1002 19:08:01.183610 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.183793 kubelet[1417]: E1002 19:08:01.183774 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.183793 kubelet[1417]: W1002 19:08:01.183788 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.183949 kubelet[1417]: E1002 19:08:01.183819 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.184115 kubelet[1417]: E1002 19:08:01.184087 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.184115 kubelet[1417]: W1002 19:08:01.184098 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.184222 kubelet[1417]: E1002 19:08:01.184139 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.184390 kubelet[1417]: E1002 19:08:01.184371 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.184390 kubelet[1417]: W1002 19:08:01.184383 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.184478 kubelet[1417]: E1002 19:08:01.184405 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.186395 kubelet[1417]: E1002 19:08:01.186362 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.186395 kubelet[1417]: W1002 19:08:01.186376 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.186395 kubelet[1417]: E1002 19:08:01.186402 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.186707 kubelet[1417]: E1002 19:08:01.186679 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.186855 kubelet[1417]: W1002 19:08:01.186709 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.186935 kubelet[1417]: E1002 19:08:01.186892 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.187015 kubelet[1417]: E1002 19:08:01.186994 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.187015 kubelet[1417]: W1002 19:08:01.187011 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.187110 kubelet[1417]: E1002 19:08:01.187033 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:01.220235 kubelet[1417]: E1002 19:08:01.220193 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:01.220235 kubelet[1417]: W1002 19:08:01.220222 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:01.220534 kubelet[1417]: E1002 19:08:01.220290 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:01.339841 kubelet[1417]: E1002 19:08:01.339648 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:01.340390 env[1113]: time="2023-10-02T19:08:01.340347465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n7wzf,Uid:f231a906-87e6-422f-81d6-82f96536a03d,Namespace:kube-system,Attempt:0,}" Oct 2 19:08:01.859132 kubelet[1417]: E1002 19:08:01.859082 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:02.021221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299136680.mount: Deactivated successfully. Oct 2 19:08:02.030576 env[1113]: time="2023-10-02T19:08:02.030490279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.032011 env[1113]: time="2023-10-02T19:08:02.031949636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.035772 env[1113]: time="2023-10-02T19:08:02.035693417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.037148 env[1113]: time="2023-10-02T19:08:02.037115554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.038918 env[1113]: time="2023-10-02T19:08:02.038851379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.040346 env[1113]: time="2023-10-02T19:08:02.040290458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.042777 env[1113]: time="2023-10-02T19:08:02.042701650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.044180 env[1113]: time="2023-10-02T19:08:02.044150247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:08:02.074350 env[1113]: time="2023-10-02T19:08:02.074259021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:02.074350 env[1113]: time="2023-10-02T19:08:02.074336777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:02.074503 env[1113]: time="2023-10-02T19:08:02.074375259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:02.074793 env[1113]: time="2023-10-02T19:08:02.074721739Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a pid=1531 runtime=io.containerd.runc.v2 Oct 2 19:08:02.076293 env[1113]: time="2023-10-02T19:08:02.076236921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:02.076340 env[1113]: time="2023-10-02T19:08:02.076291753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:02.076340 env[1113]: time="2023-10-02T19:08:02.076307723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:02.076511 env[1113]: time="2023-10-02T19:08:02.076476069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddadea437a4728167683c96d6eb2487d79807de4b5d94256c9172a5abed7984b pid=1526 runtime=io.containerd.runc.v2 Oct 2 19:08:02.100818 systemd[1]: Started cri-containerd-ddadea437a4728167683c96d6eb2487d79807de4b5d94256c9172a5abed7984b.scope. Oct 2 19:08:02.102091 systemd[1]: Started cri-containerd-fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a.scope. 
Oct 2 19:08:02.126780 kernel: kauditd_printk_skb: 395 callbacks suppressed Oct 2 19:08:02.126887 kernel: audit: type=1400 audit(1696273682.122:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.126907 kernel: audit: type=1400 audit(1696273682.122:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.132028 kernel: audit: type=1400 audit(1696273682.122:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.132143 kernel: audit: type=1400 audit(1696273682.122:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.136769 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:08:02.136821 kernel: audit: type=1400 audit(1696273682.122:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.136844 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:08:02.136862 kernel: audit: backlog limit exceeded Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.141016 kernel: audit: type=1400 audit(1696273682.122:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.141058 kernel: audit: type=1400 audit(1696273682.122:557): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.123000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.123000 audit: BPF prog-id=64 op=LOAD Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=1526 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616465613433376134373238313637363833633936643665623234 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=1526 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616465613433376134373238313637363833633936643665623234 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit: BPF prog-id=65 op=LOAD Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.127000 audit[1549]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000240160 items=0 ppid=1526 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616465613433376134373238313637363833633936643665623234 Oct 2 19:08:02.129000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.129000 audit: BPF prog-id=66 op=LOAD Oct 2 19:08:02.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit: BPF prog-id=67 op=LOAD Oct 2 19:08:02.129000 audit[1549]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0002401a8 items=0 ppid=1526 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616465613433376134373238313637363833633936643665623234 Oct 2 19:08:02.131000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:08:02.131000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { perfmon } 
for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { perfmon } for pid=1549 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: AVC avc: denied { bpf } for pid=1549 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.131000 audit[1549]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0002405b8 items=0 ppid=1526 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464616465613433376134373238313637363833633936643665623234 Oct 2 19:08:02.131000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=1531 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336239313162346636316530666633646661313865343435346638 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=1531 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.142000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336239313162346636316530666633646661313865343435346638 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit: BPF prog-id=69 op=LOAD Oct 2 19:08:02.142000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c0002f15c0 items=0 ppid=1531 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336239313162346636316530666633646661313865343435346638 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } 
for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.142000 audit: BPF prog-id=70 op=LOAD Oct 2 19:08:02.142000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0002f1608 items=0 ppid=1531 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336239313162346636316530666633646661313865343435346638 Oct 2 19:08:02.142000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:08:02.142000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { perfmon } 
for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:02.143000 audit: BPF prog-id=71 op=LOAD Oct 2 19:08:02.143000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0002f1a18 items=0 ppid=1531 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:02.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336239313162346636316530666633646661313865343435346638 Oct 2 19:08:02.162387 env[1113]: time="2023-10-02T19:08:02.161183035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv4q6,Uid:c26f8129-1c24-4209-a8b8-5073db1c8880,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\"" Oct 2 19:08:02.162387 env[1113]: time="2023-10-02T19:08:02.161425550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n7wzf,Uid:f231a906-87e6-422f-81d6-82f96536a03d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddadea437a4728167683c96d6eb2487d79807de4b5d94256c9172a5abed7984b\"" Oct 2 19:08:02.162683 kubelet[1417]: E1002 19:08:02.162646 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:02.162902 kubelet[1417]: E1002 19:08:02.162882 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:02.164448 env[1113]: time="2023-10-02T19:08:02.164412882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:08:02.859790 kubelet[1417]: E1002 19:08:02.859732 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:03.657526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015385024.mount: Deactivated successfully. 
Oct 2 19:08:03.860288 kubelet[1417]: E1002 19:08:03.860237 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:04.861228 kubelet[1417]: E1002 19:08:04.861178 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:05.618903 env[1113]: time="2023-10-02T19:08:05.618830077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:05.861806 kubelet[1417]: E1002 19:08:05.861730 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:05.862292 env[1113]: time="2023-10-02T19:08:05.862104494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:06.006981 env[1113]: time="2023-10-02T19:08:06.006837600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:06.058626 env[1113]: time="2023-10-02T19:08:06.058582861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:06.059124 env[1113]: time="2023-10-02T19:08:06.059104790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 2 19:08:06.059728 env[1113]: time="2023-10-02T19:08:06.059696800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0\"" Oct 2 19:08:06.061076 env[1113]: time="2023-10-02T19:08:06.061021484Z" level=info msg="CreateContainer within sandbox \"ddadea437a4728167683c96d6eb2487d79807de4b5d94256c9172a5abed7984b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:08:06.862774 kubelet[1417]: E1002 19:08:06.862718 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:07.089455 env[1113]: time="2023-10-02T19:08:07.089359605Z" level=info msg="CreateContainer within sandbox \"ddadea437a4728167683c96d6eb2487d79807de4b5d94256c9172a5abed7984b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9841982d5332cfc24119d7c7fda1635c88ea3f231f8cfb94060459297a0cb76c\"" Oct 2 19:08:07.090494 env[1113]: time="2023-10-02T19:08:07.090436475Z" level=info msg="StartContainer for \"9841982d5332cfc24119d7c7fda1635c88ea3f231f8cfb94060459297a0cb76c\"" Oct 2 19:08:07.110906 systemd[1]: Started cri-containerd-9841982d5332cfc24119d7c7fda1635c88ea3f231f8cfb94060459297a0cb76c.scope. 
Oct 2 19:08:07.125000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127227 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:08:07.127284 kernel: audit: type=1400 audit(1696273687.125:587): avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.125000 audit[1607]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1526 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.132363 kernel: audit: type=1300 audit(1696273687.125:587): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1526 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.132440 kernel: audit: type=1327 audit(1696273687.125:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343139383264353333326366633234313139643763376664613136 Oct 2 19:08:07.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343139383264353333326366633234313139643763376664613136 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.137197 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.137260 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.139000 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.142649 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { 
perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.146887 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.146943 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.146970 kernel: audit: type=1400 audit(1696273687.127:588): avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.127000 audit: BPF prog-id=72 op=LOAD Oct 2 19:08:07.127000 audit[1607]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0002e1be0 items=0 ppid=1526 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343139383264353333326366633234313139643763376664613136 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { 
perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.131000 audit: BPF prog-id=73 op=LOAD Oct 2 19:08:07.131000 audit[1607]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0002e1c28 items=0 ppid=1526 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343139383264353333326366633234313139643763376664613136 Oct 2 19:08:07.134000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:08:07.134000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { 
perfmon } for pid=1607 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit[1607]: AVC avc: denied { bpf } for pid=1607 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:07.134000 audit: BPF prog-id=74 op=LOAD Oct 2 19:08:07.134000 audit[1607]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002e1cb8 items=0 ppid=1526 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343139383264353333326366633234313139643763376664613136 Oct 2 19:08:07.208000 audit[1661]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.208000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec954e6b0 a2=0 a3=7ffec954e69c items=0 ppid=1618 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:08:07.209000 audit[1663]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.209000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec4e3e780 a2=0 a3=7ffec4e3e76c items=0 ppid=1618 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:08:07.209000 audit[1662]: NETFILTER_CFG table=mangle:16 family=10 entries=1 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.209000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff89d74390 a2=0 a3=7fff89d7437c items=0 ppid=1618 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:08:07.210000 audit[1664]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_chain pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.210000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffe0aa40f60 a2=0 a3=7ffe0aa40f4c items=0 ppid=1618 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:08:07.210000 audit[1665]: NETFILTER_CFG table=nat:18 family=10 entries=1 op=nft_register_chain pid=1665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.210000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdea780bb0 a2=0 a3=7ffdea780b9c items=0 ppid=1618 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:08:07.211000 audit[1666]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.211000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc635b30a0 a2=0 a3=7ffc635b308c items=0 ppid=1618 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:08:07.232038 env[1113]: time="2023-10-02T19:08:07.231964530Z" level=info msg="StartContainer for \"9841982d5332cfc24119d7c7fda1635c88ea3f231f8cfb94060459297a0cb76c\" returns successfully" Oct 2 19:08:07.310000 audit[1667]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.310000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd40727d30 a2=0 a3=7ffd40727d1c items=0 ppid=1618 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:08:07.315000 audit[1669]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.315000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff121e07b0 a2=0 a3=7fff121e079c items=0 ppid=1618 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:08:07.319000 audit[1672]: NETFILTER_CFG table=filter:22 
family=2 entries=2 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.319000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdfc7d1460 a2=0 a3=7ffdfc7d144c items=0 ppid=1618 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:08:07.320000 audit[1673]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.320000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff78681aa0 a2=0 a3=7fff78681a8c items=0 ppid=1618 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:08:07.323000 audit[1675]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.323000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc508ed10 a2=0 a3=7ffdc508ecfc items=0 ppid=1618 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:08:07.324000 audit[1676]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.324000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff920c1ec0 a2=0 a3=7fff920c1eac items=0 ppid=1618 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:08:07.328000 audit[1678]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.328000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff151196d0 a2=0 a3=7fff151196bc items=0 ppid=1618 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.328000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:08:07.331000 audit[1681]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.331000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff75175350 a2=0 a3=7fff7517533c items=0 ppid=1618 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:08:07.333000 audit[1682]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.333000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe83e8d30 a2=0 a3=7fffe83e8d1c items=0 ppid=1618 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:08:07.335000 audit[1684]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.335000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd2e2acc0 a2=0 a3=7ffdd2e2acac items=0 ppid=1618 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:08:07.336000 audit[1685]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.336000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc8e3f9b0 a2=0 a3=7ffcc8e3f99c items=0 ppid=1618 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:08:07.339000 audit[1687]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.339000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea2c26510 a2=0 a3=7ffea2c264fc items=0 ppid=1618 pid=1687 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:08:07.342000 audit[1690]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.342000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0243f640 a2=0 a3=7ffd0243f62c items=0 ppid=1618 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:08:07.346000 audit[1693]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.346000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd82491c10 a2=0 a3=7ffd82491bfc items=0 ppid=1618 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:08:07.348000 audit[1694]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.348000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2c1ee2f0 a2=0 a3=7ffd2c1ee2dc items=0 ppid=1618 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:08:07.350000 audit[1696]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.350000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd44c74460 a2=0 a3=7ffd44c7444c items=0 ppid=1618 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.350000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:08:07.373000 audit[1701]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.373000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd96f652e0 a2=0 a3=7ffd96f652cc items=0 ppid=1618 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:08:07.375000 audit[1702]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.375000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2c6d05a0 a2=0 a3=7fff2c6d058c items=0 ppid=1618 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:08:07.377000 audit[1704]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:08:07.377000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdefc6d570 a2=0 a3=7ffdefc6d55c items=0 ppid=1618 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:08:07.396000 audit[1710]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:07.396000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffce01e3cf0 a2=0 a3=7ffce01e3cdc items=0 ppid=1618 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:07.413000 audit[1710]: NETFILTER_CFG table=nat:40 family=2 entries=21 op=nft_register_chain pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:07.413000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffce01e3cf0 a2=0 a3=7ffce01e3cdc items=0 ppid=1618 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:07.415000 audit[1716]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.415000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc8f3c4e60 a2=0 a3=7ffc8f3c4e4c items=0 ppid=1618 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:08:07.419000 audit[1718]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.419000 audit[1718]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffef5a1f640 a2=0 a3=7ffef5a1f62c items=0 ppid=1618 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:08:07.425000 audit[1721]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.425000 audit[1721]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff261fda10 a2=0 a3=7fff261fd9fc items=0 ppid=1618 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:08:07.426000 audit[1722]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.426000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc507d92e0 a2=0 a3=7ffc507d92cc items=0 ppid=1618 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:08:07.429000 audit[1724]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.429000 audit[1724]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=528 a0=3 a1=7ffc31346160 a2=0 a3=7ffc3134614c items=0 ppid=1618 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:08:07.430000 audit[1725]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.430000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0f8c3ba0 a2=0 a3=7ffe0f8c3b8c items=0 ppid=1618 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:08:07.433000 audit[1727]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.433000 audit[1727]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0d0fd3f0 a2=0 a3=7ffc0d0fd3dc items=0 ppid=1618 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:08:07.436000 audit[1730]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.436000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd6cc7e130 a2=0 a3=7ffd6cc7e11c items=0 ppid=1618 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:08:07.436000 audit[1731]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1731 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.436000 audit[1731]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed2c5c640 a2=0 a3=7ffed2c5c62c items=0 ppid=1618 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.436000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:08:07.439000 audit[1733]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.439000 audit[1733]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe46ef220 a2=0 a3=7fffe46ef20c items=0 ppid=1618 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:08:07.440000 audit[1734]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.440000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe85ac8eb0 a2=0 a3=7ffe85ac8e9c items=0 ppid=1618 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:08:07.442000 audit[1736]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.442000 audit[1736]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc77b7d40 a2=0 a3=7ffcc77b7d2c items=0 ppid=1618 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.442000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:08:07.446000 audit[1739]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.446000 audit[1739]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2bb686e0 a2=0 a3=7fff2bb686cc items=0 ppid=1618 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.446000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:08:07.449000 audit[1742]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.449000 audit[1742]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2ed33bb0 a2=0 a3=7ffd2ed33b9c items=0 ppid=1618 
pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:08:07.451000 audit[1743]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.451000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd4ced9a10 a2=0 a3=7ffd4ced99fc items=0 ppid=1618 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:08:07.453000 audit[1745]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.453000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff7059b0b0 a2=0 a3=7fff7059b09c items=0 ppid=1618 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:08:07.456000 audit[1748]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.456000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc96086890 a2=0 a3=7ffc9608687c items=0 ppid=1618 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:08:07.458000 audit[1749]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.458000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8ea21ce0 a2=0 a3=7ffc8ea21ccc items=0 ppid=1618 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:08:07.460000 audit[1751]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1751 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.460000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffda244d680 a2=0 a3=7ffda244d66c items=0 ppid=1618 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:08:07.461000 audit[1752]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.461000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef175ffc0 a2=0 a3=7ffef175ffac items=0 ppid=1618 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:08:07.463000 audit[1754]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.463000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcee409490 a2=0 a3=7ffcee40947c items=0 ppid=1618 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:08:07.465000 audit[1757]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:08:07.465000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5b7791d0 a2=0 a3=7ffc5b7791bc items=0 ppid=1618 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:08:07.468000 audit[1759]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:08:07.468000 audit[1759]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffec7187810 a2=0 a3=7ffec71877fc items=0 ppid=1618 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.468000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:07.469000 audit[1759]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1759 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:08:07.469000 audit[1759]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffec7187810 a2=0 a3=7ffec71877fc items=0 ppid=1618 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:07.469000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:07.863530 kubelet[1417]: E1002 19:08:07.863400 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:08.122442 kubelet[1417]: E1002 19:08:08.122331 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:08.223832 kubelet[1417]: E1002 19:08:08.223788 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.223832 kubelet[1417]: W1002 19:08:08.223821 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.224019 kubelet[1417]: E1002 19:08:08.223854 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.224083 kubelet[1417]: E1002 19:08:08.224062 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.224083 kubelet[1417]: W1002 19:08:08.224076 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.224136 kubelet[1417]: E1002 19:08:08.224092 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.224303 kubelet[1417]: E1002 19:08:08.224284 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.224303 kubelet[1417]: W1002 19:08:08.224298 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.224350 kubelet[1417]: E1002 19:08:08.224312 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.224497 kubelet[1417]: E1002 19:08:08.224479 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.224497 kubelet[1417]: W1002 19:08:08.224492 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.224545 kubelet[1417]: E1002 19:08:08.224506 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.224721 kubelet[1417]: E1002 19:08:08.224706 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.224721 kubelet[1417]: W1002 19:08:08.224717 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.224799 kubelet[1417]: E1002 19:08:08.224729 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.224958 kubelet[1417]: E1002 19:08:08.224944 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.224958 kubelet[1417]: W1002 19:08:08.224955 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225023 kubelet[1417]: E1002 19:08:08.224968 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.225134 kubelet[1417]: E1002 19:08:08.225122 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.225134 kubelet[1417]: W1002 19:08:08.225132 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225182 kubelet[1417]: E1002 19:08:08.225143 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.225330 kubelet[1417]: E1002 19:08:08.225316 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.225330 kubelet[1417]: W1002 19:08:08.225326 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225389 kubelet[1417]: E1002 19:08:08.225337 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.225505 kubelet[1417]: E1002 19:08:08.225491 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.225505 kubelet[1417]: W1002 19:08:08.225502 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225588 kubelet[1417]: E1002 19:08:08.225516 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.225713 kubelet[1417]: E1002 19:08:08.225699 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.225713 kubelet[1417]: W1002 19:08:08.225710 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225790 kubelet[1417]: E1002 19:08:08.225721 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.225923 kubelet[1417]: E1002 19:08:08.225908 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.225923 kubelet[1417]: W1002 19:08:08.225919 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.225988 kubelet[1417]: E1002 19:08:08.225930 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.226126 kubelet[1417]: E1002 19:08:08.226111 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.226126 kubelet[1417]: W1002 19:08:08.226121 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.226186 kubelet[1417]: E1002 19:08:08.226133 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.226309 kubelet[1417]: E1002 19:08:08.226295 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.226309 kubelet[1417]: W1002 19:08:08.226306 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.226376 kubelet[1417]: E1002 19:08:08.226318 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.226510 kubelet[1417]: E1002 19:08:08.226495 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.226510 kubelet[1417]: W1002 19:08:08.226506 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.226589 kubelet[1417]: E1002 19:08:08.226520 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.226690 kubelet[1417]: E1002 19:08:08.226677 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.226690 kubelet[1417]: W1002 19:08:08.226687 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.226766 kubelet[1417]: E1002 19:08:08.226698 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.226921 kubelet[1417]: E1002 19:08:08.226906 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.226921 kubelet[1417]: W1002 19:08:08.226919 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.226990 kubelet[1417]: E1002 19:08:08.226931 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.227099 kubelet[1417]: E1002 19:08:08.227083 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.227099 kubelet[1417]: W1002 19:08:08.227093 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.227099 kubelet[1417]: E1002 19:08:08.227103 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.227287 kubelet[1417]: E1002 19:08:08.227261 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.227287 kubelet[1417]: W1002 19:08:08.227271 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.227287 kubelet[1417]: E1002 19:08:08.227281 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.227440 kubelet[1417]: E1002 19:08:08.227426 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.227440 kubelet[1417]: W1002 19:08:08.227438 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.227498 kubelet[1417]: E1002 19:08:08.227449 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.227622 kubelet[1417]: E1002 19:08:08.227608 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.227622 kubelet[1417]: W1002 19:08:08.227619 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.227694 kubelet[1417]: E1002 19:08:08.227632 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.231946 kubelet[1417]: E1002 19:08:08.231923 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.231946 kubelet[1417]: W1002 19:08:08.231940 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.232015 kubelet[1417]: E1002 19:08:08.231964 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.232149 kubelet[1417]: E1002 19:08:08.232136 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.232149 kubelet[1417]: W1002 19:08:08.232147 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.232220 kubelet[1417]: E1002 19:08:08.232163 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.232359 kubelet[1417]: E1002 19:08:08.232330 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.232359 kubelet[1417]: W1002 19:08:08.232341 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.232359 kubelet[1417]: E1002 19:08:08.232359 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232498 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233014 kubelet[1417]: W1002 19:08:08.232509 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232524 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232643 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233014 kubelet[1417]: W1002 19:08:08.232651 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232665 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232827 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233014 kubelet[1417]: W1002 19:08:08.232835 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233014 kubelet[1417]: E1002 19:08:08.232849 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233034 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233985 kubelet[1417]: W1002 19:08:08.233041 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233055 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233169 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233985 kubelet[1417]: W1002 19:08:08.233179 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233194 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233318 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.233985 kubelet[1417]: W1002 19:08:08.233325 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233337 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.233985 kubelet[1417]: E1002 19:08:08.233469 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.234317 kubelet[1417]: W1002 19:08:08.233477 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.234317 kubelet[1417]: E1002 19:08:08.233489 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.234317 kubelet[1417]: E1002 19:08:08.233656 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.234317 kubelet[1417]: W1002 19:08:08.233665 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.234317 kubelet[1417]: E1002 19:08:08.233677 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:08.234317 kubelet[1417]: E1002 19:08:08.233945 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:08.234317 kubelet[1417]: W1002 19:08:08.233953 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:08.234317 kubelet[1417]: E1002 19:08:08.233964 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:08.864519 kubelet[1417]: E1002 19:08:08.864457 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:08.883326 kubelet[1417]: I1002 19:08:08.883285 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n7wzf" podStartSLOduration=4.987414617 podCreationTimestamp="2023-10-02 19:08:00 +0000 UTC" firstStartedPulling="2023-10-02 19:08:02.163632979 +0000 UTC m=+3.665670922" lastFinishedPulling="2023-10-02 19:08:06.059438566 +0000 UTC m=+7.561476509" observedRunningTime="2023-10-02 19:08:08.882762826 +0000 UTC m=+10.384800779" watchObservedRunningTime="2023-10-02 19:08:08.883220204 +0000 UTC m=+10.385258147" Oct 2 19:08:09.123231 kubelet[1417]: E1002 19:08:09.123098 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:09.133973 kubelet[1417]: E1002 19:08:09.133932 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.133973 kubelet[1417]: W1002 19:08:09.133961 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.133973 kubelet[1417]: E1002 19:08:09.133994 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.134401 kubelet[1417]: E1002 19:08:09.134357 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.134464 kubelet[1417]: W1002 19:08:09.134395 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.134464 kubelet[1417]: E1002 19:08:09.134443 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.134779 kubelet[1417]: E1002 19:08:09.134763 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.134779 kubelet[1417]: W1002 19:08:09.134774 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.134874 kubelet[1417]: E1002 19:08:09.134787 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.134990 kubelet[1417]: E1002 19:08:09.134972 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.134990 kubelet[1417]: W1002 19:08:09.134983 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.134990 kubelet[1417]: E1002 19:08:09.134994 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.135226 kubelet[1417]: E1002 19:08:09.135209 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.135226 kubelet[1417]: W1002 19:08:09.135219 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.135226 kubelet[1417]: E1002 19:08:09.135231 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.135415 kubelet[1417]: E1002 19:08:09.135401 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.135415 kubelet[1417]: W1002 19:08:09.135411 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.135489 kubelet[1417]: E1002 19:08:09.135422 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.135606 kubelet[1417]: E1002 19:08:09.135589 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.135606 kubelet[1417]: W1002 19:08:09.135599 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.135606 kubelet[1417]: E1002 19:08:09.135610 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.135789 kubelet[1417]: E1002 19:08:09.135776 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.135789 kubelet[1417]: W1002 19:08:09.135786 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.135863 kubelet[1417]: E1002 19:08:09.135797 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.135976 kubelet[1417]: E1002 19:08:09.135963 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136014 kubelet[1417]: W1002 19:08:09.135978 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.136014 kubelet[1417]: E1002 19:08:09.135992 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.136156 kubelet[1417]: E1002 19:08:09.136139 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136156 kubelet[1417]: W1002 19:08:09.136150 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.136156 kubelet[1417]: E1002 19:08:09.136161 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.136339 kubelet[1417]: E1002 19:08:09.136325 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136339 kubelet[1417]: W1002 19:08:09.136335 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.136402 kubelet[1417]: E1002 19:08:09.136345 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.136536 kubelet[1417]: E1002 19:08:09.136519 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136536 kubelet[1417]: W1002 19:08:09.136529 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.136536 kubelet[1417]: E1002 19:08:09.136539 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.136757 kubelet[1417]: E1002 19:08:09.136732 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136757 kubelet[1417]: W1002 19:08:09.136755 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.136820 kubelet[1417]: E1002 19:08:09.136766 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.136938 kubelet[1417]: E1002 19:08:09.136926 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.136938 kubelet[1417]: W1002 19:08:09.136936 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.137009 kubelet[1417]: E1002 19:08:09.136947 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.137114 kubelet[1417]: E1002 19:08:09.137100 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.137114 kubelet[1417]: W1002 19:08:09.137111 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.137178 kubelet[1417]: E1002 19:08:09.137123 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.137332 kubelet[1417]: E1002 19:08:09.137319 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.137332 kubelet[1417]: W1002 19:08:09.137329 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.137405 kubelet[1417]: E1002 19:08:09.137343 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.137522 kubelet[1417]: E1002 19:08:09.137505 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.137522 kubelet[1417]: W1002 19:08:09.137516 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.137622 kubelet[1417]: E1002 19:08:09.137527 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.137756 kubelet[1417]: E1002 19:08:09.137722 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.137756 kubelet[1417]: W1002 19:08:09.137745 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.137833 kubelet[1417]: E1002 19:08:09.137758 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.137942 kubelet[1417]: E1002 19:08:09.137924 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.137942 kubelet[1417]: W1002 19:08:09.137936 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.138024 kubelet[1417]: E1002 19:08:09.137950 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.138137 kubelet[1417]: E1002 19:08:09.138119 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.138137 kubelet[1417]: W1002 19:08:09.138131 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.138238 kubelet[1417]: E1002 19:08:09.138149 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.139319 kubelet[1417]: E1002 19:08:09.139287 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.139319 kubelet[1417]: W1002 19:08:09.139301 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.139319 kubelet[1417]: E1002 19:08:09.139313 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.139575 kubelet[1417]: E1002 19:08:09.139546 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.139575 kubelet[1417]: W1002 19:08:09.139559 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.139575 kubelet[1417]: E1002 19:08:09.139573 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.139753 kubelet[1417]: E1002 19:08:09.139720 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.139753 kubelet[1417]: W1002 19:08:09.139732 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.139818 kubelet[1417]: E1002 19:08:09.139761 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.139915 kubelet[1417]: E1002 19:08:09.139903 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.139915 kubelet[1417]: W1002 19:08:09.139912 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.139985 kubelet[1417]: E1002 19:08:09.139932 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.140087 kubelet[1417]: E1002 19:08:09.140072 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.140087 kubelet[1417]: W1002 19:08:09.140080 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.140135 kubelet[1417]: E1002 19:08:09.140092 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.140294 kubelet[1417]: E1002 19:08:09.140280 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.140294 kubelet[1417]: W1002 19:08:09.140291 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.140341 kubelet[1417]: E1002 19:08:09.140303 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.140530 kubelet[1417]: E1002 19:08:09.140516 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.140530 kubelet[1417]: W1002 19:08:09.140528 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.140605 kubelet[1417]: E1002 19:08:09.140545 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.140696 kubelet[1417]: E1002 19:08:09.140685 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.140696 kubelet[1417]: W1002 19:08:09.140694 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.140762 kubelet[1417]: E1002 19:08:09.140705 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.140857 kubelet[1417]: E1002 19:08:09.140845 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.140857 kubelet[1417]: W1002 19:08:09.140854 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.140919 kubelet[1417]: E1002 19:08:09.140868 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.141006 kubelet[1417]: E1002 19:08:09.140994 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.141006 kubelet[1417]: W1002 19:08:09.141002 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.141075 kubelet[1417]: E1002 19:08:09.141017 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.141220 kubelet[1417]: E1002 19:08:09.141202 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.141248 kubelet[1417]: W1002 19:08:09.141219 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.141248 kubelet[1417]: E1002 19:08:09.141244 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:08:09.141453 kubelet[1417]: E1002 19:08:09.141438 1417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:08:09.141453 kubelet[1417]: W1002 19:08:09.141448 1417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:08:09.141521 kubelet[1417]: E1002 19:08:09.141462 1417 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:08:09.864701 kubelet[1417]: E1002 19:08:09.864626 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:10.865511 kubelet[1417]: E1002 19:08:10.865439 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:11.155439 env[1113]: time="2023-10-02T19:08:11.155298111Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:11.157533 env[1113]: time="2023-10-02T19:08:11.157475735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed8b7bbb113fecfcce8e15c7d7232b3fe31ed6f37b04df455f6a3f2bc8695d72,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:11.162260 env[1113]: time="2023-10-02T19:08:11.162201618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:11.164896 env[1113]: time="2023-10-02T19:08:11.164849213Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:182a323c25a3503be8c504892a12a55d99a42c3a582cb8e93a1ecc7c193a44c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:11.165962 env[1113]: time="2023-10-02T19:08:11.165930551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0\" returns image reference \"sha256:ed8b7bbb113fecfcce8e15c7d7232b3fe31ed6f37b04df455f6a3f2bc8695d72\"" Oct 2 19:08:11.168134 env[1113]: time="2023-10-02T19:08:11.168107413Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 2 19:08:11.180162 env[1113]: time="2023-10-02T19:08:11.180096627Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0\"" Oct 2 19:08:11.180760 env[1113]: time="2023-10-02T19:08:11.180716099Z" level=info msg="StartContainer for \"223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0\"" Oct 2 19:08:11.199645 systemd[1]: Started cri-containerd-223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0.scope. 
Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=1531 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:11.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232333134366538616530326435313965633266643366653332666431 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.211000 audit: BPF prog-id=75 op=LOAD Oct 2 19:08:11.211000 audit[1831]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bd9d8 a2=78 a3=c00026bf90 items=0 ppid=1531 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:11.211000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232333134366538616530326435313965633266643366653332666431 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit: BPF prog-id=76 op=LOAD Oct 2 19:08:11.212000 audit[1831]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001bd770 a2=78 a3=c00026bfd8 items=0 ppid=1531 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:11.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232333134366538616530326435313965633266643366653332666431 Oct 2 19:08:11.212000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:08:11.212000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { perfmon } for pid=1831 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit[1831]: AVC avc: denied { bpf } for pid=1831 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:11.212000 audit: BPF prog-id=77 op=LOAD Oct 2 19:08:11.212000 audit[1831]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bdc30 a2=78 a3=c000334068 items=0 ppid=1531 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:11.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232333134366538616530326435313965633266643366653332666431 Oct 2 19:08:11.227205 env[1113]: time="2023-10-02T19:08:11.227139509Z" level=info msg="StartContainer for \"223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0\" returns successfully" Oct 2 19:08:11.235109 systemd[1]: cri-containerd-223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0.scope: Deactivated successfully. Oct 2 19:08:11.238000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:08:11.273983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0-rootfs.mount: Deactivated successfully. 
Oct 2 19:08:11.370506 env[1113]: time="2023-10-02T19:08:11.370411275Z" level=info msg="shim disconnected" id=223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0 Oct 2 19:08:11.370506 env[1113]: time="2023-10-02T19:08:11.370501234Z" level=warning msg="cleaning up after shim disconnected" id=223146e8ae02d519ec2fd3fe32fd158ac1019aaa5b6164f75daa62dbbab620b0 namespace=k8s.io Oct 2 19:08:11.370506 env[1113]: time="2023-10-02T19:08:11.370518737Z" level=info msg="cleaning up dead shim" Oct 2 19:08:11.381472 env[1113]: time="2023-10-02T19:08:11.381401235Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:08:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1868 runtime=io.containerd.runc.v2\n" Oct 2 19:08:11.868703 kubelet[1417]: E1002 19:08:11.868632 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:12.131031 kubelet[1417]: E1002 19:08:12.130274 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:12.132568 env[1113]: time="2023-10-02T19:08:12.132279099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.25.0\"" Oct 2 19:08:12.871626 kubelet[1417]: E1002 19:08:12.871541 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:13.875487 kubelet[1417]: E1002 19:08:13.874680 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:14.878760 kubelet[1417]: E1002 19:08:14.875630 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:15.879251 kubelet[1417]: E1002 19:08:15.879008 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:16.206612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182625517.mount: Deactivated successfully. Oct 2 19:08:16.881653 kubelet[1417]: E1002 19:08:16.881582 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:17.005384 kubelet[1417]: I1002 19:08:17.005093 1417 topology_manager.go:215] "Topology Admit Handler" podUID="8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4" podNamespace="tigera-operator" podName="tigera-operator-8547bd6cc6-d8wl8" Oct 2 19:08:17.039253 systemd[1]: Created slice kubepods-besteffort-pod8f6e9ca9_b2e9_4d52_9c9e_92e73ffba2e4.slice. 
Oct 2 19:08:17.070693 kubelet[1417]: I1002 19:08:17.070453 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprm4\" (UniqueName: \"kubernetes.io/projected/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-kube-api-access-bprm4\") pod \"tigera-operator-8547bd6cc6-d8wl8\" (UID: \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\") " pod="tigera-operator/tigera-operator-8547bd6cc6-d8wl8" Oct 2 19:08:17.070693 kubelet[1417]: I1002 19:08:17.070532 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-var-lib-calico\") pod \"tigera-operator-8547bd6cc6-d8wl8\" (UID: \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\") " pod="tigera-operator/tigera-operator-8547bd6cc6-d8wl8" Oct 2 19:08:17.350673 env[1113]: time="2023-10-02T19:08:17.350486070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8547bd6cc6-d8wl8,Uid:8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4,Namespace:tigera-operator,Attempt:0,}" Oct 2 19:08:17.882390 kubelet[1417]: E1002 19:08:17.882324 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:17.996164 env[1113]: time="2023-10-02T19:08:17.996010669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:17.996164 env[1113]: time="2023-10-02T19:08:17.996066894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:17.996655 env[1113]: time="2023-10-02T19:08:17.996080079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:17.996898 env[1113]: time="2023-10-02T19:08:17.996818594Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a pid=1895 runtime=io.containerd.runc.v2 Oct 2 19:08:18.057088 systemd[1]: Started cri-containerd-a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a.scope. Oct 2 19:08:18.060560 systemd[1]: run-containerd-runc-k8s.io-a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a-runc.U0czII.mount: Deactivated successfully. 
Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113591 kernel: kauditd_printk_skb: 230 callbacks suppressed Oct 2 19:08:18.113840 kernel: audit: type=1400 audit(1696273698.100:651): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113877 kernel: audit: type=1400 audit(1696273698.100:652): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.124198 kernel: audit: type=1400 audit(1696273698.100:653): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.124537 kernel: audit: type=1400 audit(1696273698.100:654): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.124677 kernel: audit: type=1400 audit(1696273698.100:655): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.146211 kernel: audit: type=1400 audit(1696273698.100:656): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.146393 kernel: audit: type=1400 audit(1696273698.100:657): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156943 kernel: audit: type=1400 audit(1696273698.100:658): avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.157134 kernel: audit: type=1400 audit(1696273698.100:659): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.100000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.101000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.168911 kernel: audit: type=1400 audit(1696273698.101:660): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.101000 audit: BPF prog-id=78 op=LOAD Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001c5c48 a2=10 a3=1c items=0 ppid=1895 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:18.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373938636261393561323032663939396531666235373163363238 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001c56b0 a2=3c a3=c items=0 ppid=1895 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:18.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373938636261393561323032663939396531666235373163363238 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.105000 audit: BPF prog-id=79 op=LOAD Oct 2 19:08:18.105000 audit[1903]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c59d8 a2=78 a3=c000110360 items=0 ppid=1895 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:18.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373938636261393561323032663939396531666235373163363238 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.113000 audit: BPF prog-id=80 op=LOAD Oct 2 19:08:18.113000 audit[1903]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001c5770 a2=78 a3=c0001103a8 items=0 ppid=1895 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:18.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373938636261393561323032663939396531666235373163363238 Oct 2 19:08:18.156000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:08:18.156000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { perfmon } for pid=1903 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit[1903]: AVC avc: denied { bpf } for pid=1903 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:18.156000 audit: BPF prog-id=81 op=LOAD Oct 2 19:08:18.156000 audit[1903]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c5c30 a2=78 a3=c0001107b8 items=0 ppid=1895 pid=1903 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:18.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139373938636261393561323032663939396531666235373163363238 Oct 2 19:08:18.197164 env[1113]: time="2023-10-02T19:08:18.197106546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8547bd6cc6-d8wl8,Uid:8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\"" Oct 2 19:08:18.857875 kubelet[1417]: E1002 19:08:18.857826 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:18.884068 kubelet[1417]: E1002 19:08:18.884014 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:19.884968 kubelet[1417]: E1002 19:08:19.884915 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:20.885772 kubelet[1417]: E1002 19:08:20.885706 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:21.312896 env[1113]: time="2023-10-02T19:08:21.312747625Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:21.315491 env[1113]: time="2023-10-02T19:08:21.315436006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:21.316995 env[1113]: time="2023-10-02T19:08:21.316964433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:21.318914 env[1113]: time="2023-10-02T19:08:21.318871469Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:34bf454be8cd5b9a35ab29c2479ff68a26497c2c87eb606e4bfe57c7fbeeff35,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:21.319518 env[1113]: time="2023-10-02T19:08:21.319486182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.25.0\" returns image reference \"sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7\"" Oct 2 19:08:21.320957 env[1113]: time="2023-10-02T19:08:21.320890285Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.29.0\"" Oct 2 19:08:21.321640 env[1113]: time="2023-10-02T19:08:21.321604124Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 2 19:08:21.346850 env[1113]: time="2023-10-02T19:08:21.346780859Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca\"" Oct 2 19:08:21.347679 env[1113]: time="2023-10-02T19:08:21.347643236Z" level=info msg="StartContainer for \"be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca\"" Oct 2 19:08:21.392911 systemd[1]: Started cri-containerd-be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca.scope. Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=8 items=0 ppid=1531 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:21.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393362346263653833386661383330376233636333623762656633 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit: BPF prog-id=82 op=LOAD Oct 2 19:08:21.422000 audit[1937]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c0001fe0a0 items=0 ppid=1531 pid=1937 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:21.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393362346263653833386661383330376233636333623762656633 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.422000 audit: BPF prog-id=83 op=LOAD Oct 2 19:08:21.422000 audit[1937]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c0001fe0e8 items=0 ppid=1531 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:21.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393362346263653833386661383330376233636333623762656633 Oct 2 19:08:21.423000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:08:21.423000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { perfmon } for pid=1937 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit[1937]: AVC avc: denied { bpf } for pid=1937 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:21.423000 audit: BPF prog-id=84 op=LOAD Oct 2 19:08:21.423000 audit[1937]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c0001fe178 items=0 ppid=1531 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:21.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265393362346263653833386661383330376233636333623762656633 Oct 2 19:08:21.444874 env[1113]: time="2023-10-02T19:08:21.444800761Z" level=info msg="StartContainer for \"be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca\" returns successfully" Oct 2 19:08:21.886434 kubelet[1417]: E1002 19:08:21.886361 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:22.184071 kubelet[1417]: E1002 19:08:22.183621 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:22.337415 systemd[1]: run-containerd-runc-k8s.io-be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca-runc.8LGJto.mount: Deactivated successfully. 
Oct 2 19:08:22.887400 kubelet[1417]: E1002 19:08:22.887307 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:23.185872 kubelet[1417]: E1002 19:08:23.185764 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:23.887654 kubelet[1417]: E1002 19:08:23.887588 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:24.888137 kubelet[1417]: E1002 19:08:24.888082 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:25.889077 kubelet[1417]: E1002 19:08:25.889023 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:26.889478 kubelet[1417]: E1002 19:08:26.889418 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:27.890473 kubelet[1417]: E1002 19:08:27.890407 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:28.048260 env[1113]: time="2023-10-02T19:08:28.048161481Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:08:28.050974 systemd[1]: cri-containerd-be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca.scope: Deactivated successfully. Oct 2 19:08:28.057000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:08:28.071144 kernel: kauditd_printk_skb: 90 callbacks suppressed Oct 2 19:08:28.071272 kernel: audit: type=1334 audit(1696273708.057:675): prog-id=84 op=UNLOAD Oct 2 19:08:28.082818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca-rootfs.mount: Deactivated successfully. 
Oct 2 19:08:28.119979 kubelet[1417]: I1002 19:08:28.119943 1417 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Oct 2 19:08:28.285994 env[1113]: time="2023-10-02T19:08:28.285628718Z" level=info msg="shim disconnected" id=be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca Oct 2 19:08:28.285994 env[1113]: time="2023-10-02T19:08:28.285690697Z" level=warning msg="cleaning up after shim disconnected" id=be93b4bce838fa8307b3cc3b7bef327c916daf4a806f1cd82b0c3c1185b03dca namespace=k8s.io Oct 2 19:08:28.285994 env[1113]: time="2023-10-02T19:08:28.285702921Z" level=info msg="cleaning up dead shim" Oct 2 19:08:28.289120 kubelet[1417]: I1002 19:08:28.289039 1417 topology_manager.go:215] "Topology Admit Handler" podUID="059f120d-41c2-40ee-916b-51ed03391c22" podNamespace="kube-system" podName="coredns-5dd5756b68-9jw66" Oct 2 19:08:28.289293 kubelet[1417]: I1002 19:08:28.289213 1417 topology_manager.go:215] "Topology Admit Handler" podUID="2b42bb8c-9ba9-4810-bfaf-54be6161be63" podNamespace="kube-system" podName="coredns-5dd5756b68-8glxb" Oct 2 19:08:28.289349 kubelet[1417]: I1002 19:08:28.289334 1417 topology_manager.go:215] "Topology Admit Handler" podUID="c37eda03-464c-4d96-9ada-29c3d253b3a0" podNamespace="calico-system" podName="calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:08:28.293481 env[1113]: time="2023-10-02T19:08:28.293415798Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:08:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1985 runtime=io.containerd.runc.v2\n" Oct 2 19:08:28.294408 systemd[1]: Created slice kubepods-burstable-pod2b42bb8c_9ba9_4810_bfaf_54be6161be63.slice. Oct 2 19:08:28.304463 systemd[1]: Created slice kubepods-besteffort-podc37eda03_464c_4d96_9ada_29c3d253b3a0.slice. Oct 2 19:08:28.308840 systemd[1]: Created slice kubepods-burstable-pod059f120d_41c2_40ee_916b_51ed03391c22.slice. Oct 2 19:08:28.319052 kubelet[1417]: I1002 19:08:28.319013 1417 topology_manager.go:215] "Topology Admit Handler" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" podNamespace="calico-system" podName="csi-node-driver-2ckzv" Oct 2 19:08:28.333132 systemd[1]: Created slice kubepods-besteffort-pod20101097_40e7_4d0a_a992_23f4379dc0f4.slice. 
Oct 2 19:08:28.472411 kubelet[1417]: I1002 19:08:28.472346 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9tx\" (UniqueName: \"kubernetes.io/projected/2b42bb8c-9ba9-4810-bfaf-54be6161be63-kube-api-access-2n9tx\") pod \"coredns-5dd5756b68-8glxb\" (UID: \"2b42bb8c-9ba9-4810-bfaf-54be6161be63\") " pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:08:28.472411 kubelet[1417]: I1002 19:08:28.472400 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjb7\" (UniqueName: \"kubernetes.io/projected/059f120d-41c2-40ee-916b-51ed03391c22-kube-api-access-6gjb7\") pod \"coredns-5dd5756b68-9jw66\" (UID: \"059f120d-41c2-40ee-916b-51ed03391c22\") " pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:08:28.472411 kubelet[1417]: I1002 19:08:28.472423 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9w4\" (UniqueName: \"kubernetes.io/projected/c37eda03-464c-4d96-9ada-29c3d253b3a0-kube-api-access-5b9w4\") pod \"calico-kube-controllers-74b9887bb6-bt4ql\" (UID: \"c37eda03-464c-4d96-9ada-29c3d253b3a0\") " pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:08:28.472673 kubelet[1417]: I1002 19:08:28.472551 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20101097-40e7-4d0a-a992-23f4379dc0f4-varrun\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.472817 kubelet[1417]: I1002 19:08:28.472774 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/059f120d-41c2-40ee-916b-51ed03391c22-config-volume\") pod \"coredns-5dd5756b68-9jw66\" (UID: \"059f120d-41c2-40ee-916b-51ed03391c22\") " pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:08:28.472897 kubelet[1417]: I1002 19:08:28.472835 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etccalico\" (UniqueName: \"kubernetes.io/host-path/20101097-40e7-4d0a-a992-23f4379dc0f4-etccalico\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.472897 kubelet[1417]: I1002 19:08:28.472867 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20101097-40e7-4d0a-a992-23f4379dc0f4-kubelet-dir\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.472897 kubelet[1417]: I1002 19:08:28.472886 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20101097-40e7-4d0a-a992-23f4379dc0f4-registration-dir\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.472983 kubelet[1417]: I1002 19:08:28.472911 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b42bb8c-9ba9-4810-bfaf-54be6161be63-config-volume\") pod \"coredns-5dd5756b68-8glxb\" (UID: 
\"2b42bb8c-9ba9-4810-bfaf-54be6161be63\") " pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:08:28.472983 kubelet[1417]: I1002 19:08:28.472936 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20101097-40e7-4d0a-a992-23f4379dc0f4-socket-dir\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.473039 kubelet[1417]: I1002 19:08:28.472990 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7j8x\" (UniqueName: \"kubernetes.io/projected/20101097-40e7-4d0a-a992-23f4379dc0f4-kube-api-access-v7j8x\") pod \"csi-node-driver-2ckzv\" (UID: \"20101097-40e7-4d0a-a992-23f4379dc0f4\") " pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:28.891530 kubelet[1417]: E1002 19:08:28.891462 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:29.197045 kubelet[1417]: E1002 19:08:29.196931 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:29.202833 kubelet[1417]: E1002 19:08:29.202807 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:29.203226 env[1113]: time="2023-10-02T19:08:29.203186217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8glxb,Uid:2b42bb8c-9ba9-4810-bfaf-54be6161be63,Namespace:kube-system,Attempt:0,}" Oct 2 19:08:29.207812 env[1113]: time="2023-10-02T19:08:29.207776794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-bt4ql,Uid:c37eda03-464c-4d96-9ada-29c3d253b3a0,Namespace:calico-system,Attempt:0,}" Oct 2 19:08:29.212213 kubelet[1417]: E1002 19:08:29.212195 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:29.218891 env[1113]: time="2023-10-02T19:08:29.218866593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jw66,Uid:059f120d-41c2-40ee-916b-51ed03391c22,Namespace:kube-system,Attempt:0,}" Oct 2 19:08:29.235271 env[1113]: time="2023-10-02T19:08:29.235247343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2ckzv,Uid:20101097-40e7-4d0a-a992-23f4379dc0f4,Namespace:calico-system,Attempt:0,}" Oct 2 19:08:29.891770 kubelet[1417]: E1002 19:08:29.891709 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:30.857313 env[1113]: time="2023-10-02T19:08:30.857207197Z" level=error msg="Failed to destroy network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.857674 env[1113]: time="2023-10-02T19:08:30.857643653Z" level=error msg="encountered an error cleaning up failed sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.857784 env[1113]: time="2023-10-02T19:08:30.857747082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2ckzv,Uid:20101097-40e7-4d0a-a992-23f4379dc0f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.858103 kubelet[1417]: E1002 19:08:30.858072 1417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.858180 kubelet[1417]: E1002 19:08:30.858169 1417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:30.858220 kubelet[1417]: E1002 19:08:30.858197 1417 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:08:30.858279 kubelet[1417]: E1002 19:08:30.858264 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:08:30.859046 env[1113]: time="2023-10-02T19:08:30.858989033Z" level=error msg="Failed to destroy network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.859393 env[1113]: time="2023-10-02T19:08:30.859354665Z" level=error msg="encountered an error cleaning up failed sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.859446 env[1113]: time="2023-10-02T19:08:30.859415992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-bt4ql,Uid:c37eda03-464c-4d96-9ada-29c3d253b3a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.859771 kubelet[1417]: E1002 19:08:30.859731 1417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.859848 kubelet[1417]: E1002 19:08:30.859813 1417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:08:30.859848 kubelet[1417]: E1002 19:08:30.859836 1417 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:08:30.859914 kubelet[1417]: E1002 19:08:30.859902 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b9887bb6-bt4ql_calico-system(c37eda03-464c-4d96-9ada-29c3d253b3a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b9887bb6-bt4ql_calico-system(c37eda03-464c-4d96-9ada-29c3d253b3a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" podUID="c37eda03-464c-4d96-9ada-29c3d253b3a0" Oct 2 19:08:30.860955 env[1113]: time="2023-10-02T19:08:30.860895189Z" level=error msg="Failed to destroy network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.861256 env[1113]: time="2023-10-02T19:08:30.861215504Z" level=error msg="encountered an error 
cleaning up failed sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.861321 env[1113]: time="2023-10-02T19:08:30.861279446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8glxb,Uid:2b42bb8c-9ba9-4810-bfaf-54be6161be63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.861586 kubelet[1417]: E1002 19:08:30.861563 1417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.861646 kubelet[1417]: E1002 19:08:30.861628 1417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:08:30.861680 kubelet[1417]: E1002 19:08:30.861656 1417 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:08:30.861763 kubelet[1417]: E1002 19:08:30.861726 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-8glxb_kube-system(2b42bb8c-9ba9-4810-bfaf-54be6161be63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-8glxb_kube-system(2b42bb8c-9ba9-4810-bfaf-54be6161be63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8glxb" podUID="2b42bb8c-9ba9-4810-bfaf-54be6161be63" Oct 2 19:08:30.877487 env[1113]: time="2023-10-02T19:08:30.877418876Z" level=error msg="Failed to destroy network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.877780 env[1113]: time="2023-10-02T19:08:30.877754861Z" 
level=error msg="encountered an error cleaning up failed sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.877826 env[1113]: time="2023-10-02T19:08:30.877803835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jw66,Uid:059f120d-41c2-40ee-916b-51ed03391c22,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.878051 kubelet[1417]: E1002 19:08:30.878029 1417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:30.878146 kubelet[1417]: E1002 19:08:30.878090 1417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:08:30.878146 kubelet[1417]: E1002 19:08:30.878120 1417 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:08:30.878221 kubelet[1417]: E1002 19:08:30.878204 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-9jw66_kube-system(059f120d-41c2-40ee-916b-51ed03391c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-9jw66_kube-system(059f120d-41c2-40ee-916b-51ed03391c22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9jw66" podUID="059f120d-41c2-40ee-916b-51ed03391c22" Oct 2 19:08:30.892094 kubelet[1417]: E1002 19:08:30.892045 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:31.200574 kubelet[1417]: I1002 19:08:31.200445 1417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:31.202979 
kubelet[1417]: I1002 19:08:31.202954 1417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:31.203876 env[1113]: time="2023-10-02T19:08:31.203460817Z" level=info msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" Oct 2 19:08:31.204381 kubelet[1417]: I1002 19:08:31.204309 1417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:31.204732 env[1113]: time="2023-10-02T19:08:31.204706804Z" level=info msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" Oct 2 19:08:31.205488 env[1113]: time="2023-10-02T19:08:31.205444306Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:08:31.206359 kubelet[1417]: I1002 19:08:31.206333 1417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:31.207267 env[1113]: time="2023-10-02T19:08:31.207234715Z" level=info msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" Oct 2 19:08:31.230689 env[1113]: time="2023-10-02T19:08:31.230617365Z" level=error msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" failed" error="failed to destroy network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:31.230980 kubelet[1417]: E1002 19:08:31.230953 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:31.231063 kubelet[1417]: E1002 19:08:31.231042 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d"} Oct 2 19:08:31.231092 kubelet[1417]: E1002 19:08:31.231080 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20101097-40e7-4d0a-a992-23f4379dc0f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:31.231164 kubelet[1417]: E1002 19:08:31.231114 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20101097-40e7-4d0a-a992-23f4379dc0f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:08:31.232801 env[1113]: time="2023-10-02T19:08:31.232750470Z" level=error msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" failed" error="failed to destroy network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:31.233564 kubelet[1417]: E1002 19:08:31.233409 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:31.233564 kubelet[1417]: E1002 19:08:31.233453 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d"} Oct 2 19:08:31.233564 kubelet[1417]: E1002 19:08:31.233504 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b42bb8c-9ba9-4810-bfaf-54be6161be63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:31.233564 kubelet[1417]: E1002 19:08:31.233539 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b42bb8c-9ba9-4810-bfaf-54be6161be63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8glxb" podUID="2b42bb8c-9ba9-4810-bfaf-54be6161be63" Oct 2 19:08:31.234860 env[1113]: time="2023-10-02T19:08:31.234786248Z" level=error msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" failed" error="failed to destroy network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:31.235136 kubelet[1417]: E1002 19:08:31.235093 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:31.235332 kubelet[1417]: E1002 19:08:31.235168 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a"} Oct 2 19:08:31.235332 kubelet[1417]: E1002 19:08:31.235211 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c37eda03-464c-4d96-9ada-29c3d253b3a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:31.235332 kubelet[1417]: E1002 19:08:31.235250 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c37eda03-464c-4d96-9ada-29c3d253b3a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" podUID="c37eda03-464c-4d96-9ada-29c3d253b3a0" Oct 2 19:08:31.248766 env[1113]: time="2023-10-02T19:08:31.248697039Z" level=error msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" failed" error="failed to destroy network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:31.249007 kubelet[1417]: E1002 19:08:31.248983 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:31.249054 kubelet[1417]: E1002 19:08:31.249029 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330"} Oct 2 19:08:31.249076 kubelet[1417]: E1002 19:08:31.249063 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"059f120d-41c2-40ee-916b-51ed03391c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:31.249151 kubelet[1417]: E1002 19:08:31.249104 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"059f120d-41c2-40ee-916b-51ed03391c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9jw66" podUID="059f120d-41c2-40ee-916b-51ed03391c22" Oct 2 19:08:31.566164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330-shm.mount: Deactivated successfully. Oct 2 19:08:31.566261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a-shm.mount: Deactivated successfully. Oct 2 19:08:31.566309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d-shm.mount: Deactivated successfully. Oct 2 19:08:31.892709 kubelet[1417]: E1002 19:08:31.892656 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:32.864203 env[1113]: time="2023-10-02T19:08:32.864117604Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.29.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:32.893117 kubelet[1417]: E1002 19:08:32.893058 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:32.994176 env[1113]: time="2023-10-02T19:08:32.994094178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:343ea4f89a32c8f197173c5d9f1ad64eb033df452c5b89a65877d8d3cfa692b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:33.050040 env[1113]: time="2023-10-02T19:08:33.049965553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.29.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:33.146654 env[1113]: time="2023-10-02T19:08:33.146505915Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:33.147271 env[1113]: time="2023-10-02T19:08:33.147240038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.29.0\" returns image reference \"sha256:343ea4f89a32c8f197173c5d9f1ad64eb033df452c5b89a65877d8d3cfa692b1\"" Oct 2 19:08:33.148169 env[1113]: time="2023-10-02T19:08:33.148138153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.25.0\"" Oct 2 19:08:33.149159 env[1113]: time="2023-10-02T19:08:33.149116722Z" level=info msg="CreateContainer within sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 2 19:08:33.190862 kubelet[1417]: I1002 19:08:33.190807 1417 topology_manager.go:215] "Topology Admit Handler" podUID="486964bf-aef1-40b3-8363-5586f9f415ec" podNamespace="default" podName="nginx-deployment-6d5f899847-54ds6" Oct 2 19:08:33.207786 systemd[1]: Created slice kubepods-besteffort-pod486964bf_aef1_40b3_8363_5586f9f415ec.slice. 
Oct 2 19:08:33.261422 env[1113]: time="2023-10-02T19:08:33.261358537Z" level=info msg="CreateContainer within sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\"" Oct 2 19:08:33.262084 env[1113]: time="2023-10-02T19:08:33.262042293Z" level=info msg="StartContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\"" Oct 2 19:08:33.293957 systemd[1]: Started cri-containerd-7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da.scope. Oct 2 19:08:33.302312 kubelet[1417]: I1002 19:08:33.302109 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kdlj\" (UniqueName: \"kubernetes.io/projected/486964bf-aef1-40b3-8363-5586f9f415ec-kube-api-access-5kdlj\") pod \"nginx-deployment-6d5f899847-54ds6\" (UID: \"486964bf-aef1-40b3-8363-5586f9f415ec\") " pod="default/nginx-deployment-6d5f899847-54ds6" Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324015 kernel: audit: type=1400 audit(1696273713.317:676): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324095 kernel: audit: type=1400 audit(1696273713.317:677): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324133 kernel: audit: type=1400 audit(1696273713.317:678): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.327960 kernel: audit: type=1400 audit(1696273713.317:679): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.328129 kernel: audit: type=1400 audit(1696273713.317:680): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.330103 kernel: audit: type=1400 audit(1696273713.317:681): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.334388 kernel: audit: type=1400 audit(1696273713.317:682): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.336644 kernel: audit: type=1400 audit(1696273713.317:683): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.317000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.344852 kernel: audit: type=1400 audit(1696273713.317:684): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.345013 kernel: audit: type=1400 audit(1696273713.318:685): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit: BPF prog-id=85 op=LOAD Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=1895 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:33.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343037326139393762616533656631343965353936643736376134 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1895 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:33.318000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343037326139393762616533656631343965353936643736376134 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.318000 audit: BPF prog-id=86 op=LOAD Oct 2 19:08:33.318000 audit[2257]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c0002891e0 items=0 ppid=1895 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:33.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343037326139393762616533656631343965353936643736376134 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { perfmon } 
for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.323000 audit: BPF prog-id=87 op=LOAD Oct 2 19:08:33.323000 audit[2257]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c000289228 items=0 ppid=1895 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:33.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343037326139393762616533656631343965353936643736376134 Oct 2 19:08:33.324000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:08:33.324000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { perfmon } 
for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { perfmon } for pid=2257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit[2257]: AVC avc: denied { bpf } for pid=2257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:33.324000 audit: BPF prog-id=88 op=LOAD Oct 2 19:08:33.324000 audit[2257]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c000289638 items=0 ppid=1895 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:33.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343037326139393762616533656631343965353936643736376134 Oct 2 19:08:33.358098 env[1113]: time="2023-10-02T19:08:33.358011621Z" level=info msg="StartContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" returns successfully" Oct 2 19:08:33.512351 env[1113]: time="2023-10-02T19:08:33.512202220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-54ds6,Uid:486964bf-aef1-40b3-8363-5586f9f415ec,Namespace:default,Attempt:0,}" Oct 2 19:08:33.893788 kubelet[1417]: E1002 19:08:33.893693 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:34.219777 kubelet[1417]: I1002 19:08:34.219656 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8547bd6cc6-d8wl8" podStartSLOduration=3.270738014 podCreationTimestamp="2023-10-02 19:08:16 +0000 UTC" firstStartedPulling="2023-10-02 19:08:18.198703981 +0000 UTC m=+19.700741924" lastFinishedPulling="2023-10-02 19:08:33.14758131 +0000 UTC m=+34.649619243" observedRunningTime="2023-10-02 19:08:34.21930457 +0000 UTC m=+35.721342513" watchObservedRunningTime="2023-10-02 19:08:34.219615333 +0000 UTC m=+35.721653276" Oct 2 19:08:34.894076 kubelet[1417]: E1002 19:08:34.894004 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:35.319670 env[1113]: time="2023-10-02T19:08:35.319487893Z" level=error msg="Failed to destroy network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:35.320036 env[1113]: time="2023-10-02T19:08:35.319933592Z" level=error msg="encountered an error cleaning up failed sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:35.320036 env[1113]: time="2023-10-02T19:08:35.319993607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-54ds6,Uid:486964bf-aef1-40b3-8363-5586f9f415ec,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:35.320316 kubelet[1417]: E1002 19:08:35.320267 1417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:35.320613 kubelet[1417]: E1002 19:08:35.320333 1417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-54ds6" Oct 2 19:08:35.320613 kubelet[1417]: E1002 19:08:35.320359 1417 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-54ds6" Oct 2 19:08:35.320613 kubelet[1417]: E1002 19:08:35.320412 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-54ds6_default(486964bf-aef1-40b3-8363-5586f9f415ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-54ds6_default(486964bf-aef1-40b3-8363-5586f9f415ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-54ds6" podUID="486964bf-aef1-40b3-8363-5586f9f415ec" Oct 2 19:08:35.321420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb-shm.mount: Deactivated successfully. 
Oct 2 19:08:35.894835 kubelet[1417]: E1002 19:08:35.894779 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:36.217769 kubelet[1417]: I1002 19:08:36.217633 1417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:36.218234 env[1113]: time="2023-10-02T19:08:36.218204531Z" level=info msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" Oct 2 19:08:36.240259 env[1113]: time="2023-10-02T19:08:36.240154036Z" level=error msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" failed" error="failed to destroy network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:36.240561 kubelet[1417]: E1002 19:08:36.240532 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:36.240629 kubelet[1417]: E1002 19:08:36.240584 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb"} Oct 2 19:08:36.240629 kubelet[1417]: E1002 19:08:36.240619 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"486964bf-aef1-40b3-8363-5586f9f415ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:36.240719 kubelet[1417]: E1002 19:08:36.240653 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"486964bf-aef1-40b3-8363-5586f9f415ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-54ds6" podUID="486964bf-aef1-40b3-8363-5586f9f415ec" Oct 2 19:08:36.895733 kubelet[1417]: E1002 19:08:36.895651 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:37.896426 kubelet[1417]: E1002 19:08:37.896362 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:38.858093 kubelet[1417]: E1002 19:08:38.858038 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:08:38.897317 kubelet[1417]: E1002 19:08:38.897257 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:39.551207 update_engine[1098]: I1002 19:08:39.551136 1098 update_attempter.cc:505] Updating boot flags... Oct 2 19:08:39.898374 kubelet[1417]: E1002 19:08:39.898325 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:40.898964 kubelet[1417]: E1002 19:08:40.898918 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:41.378909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835248730.mount: Deactivated successfully. Oct 2 19:08:41.750575 env[1113]: time="2023-10-02T19:08:41.750412893Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:41.753300 env[1113]: time="2023-10-02T19:08:41.753199236Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:41.754719 env[1113]: time="2023-10-02T19:08:41.754670084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:41.756788 env[1113]: time="2023-10-02T19:08:41.756690096Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e898f4b7b55c908c88dad008ae939024e71ed93c5effbb10cca891b658b2f001,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:41.757247 env[1113]: time="2023-10-02T19:08:41.757200423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.25.0\" returns image reference \"sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e\"" Oct 2 19:08:41.761220 env[1113]: time="2023-10-02T19:08:41.760205571Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 2 19:08:41.783540 env[1113]: time="2023-10-02T19:08:41.783464846Z" level=info msg="CreateContainer within sandbox \"fa3b911b4f61e0ff3dfa18e4454f811ea41616e37725d04860abfbf4662c3d8a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94\"" Oct 2 19:08:41.784248 env[1113]: time="2023-10-02T19:08:41.784205169Z" level=info msg="StartContainer for \"b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94\"" Oct 2 19:08:41.832305 systemd[1]: Started cri-containerd-b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94.scope. 
Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883651 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 2 19:08:41.883770 kernel: audit: type=1400 audit(1696273721.873:694): avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1531 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:41.889010 kernel: audit: type=1300 audit(1696273721.873:694): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1531 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:41.889067 kernel: audit: type=1327 audit(1696273721.873:694): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633936333235393563623237373535353639376630306438333966 Oct 2 19:08:41.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633936333235393563623237373535353639376630306438333966 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.893867 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.893940 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.897914 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.898017 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { 
perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.899271 kubelet[1417]: E1002 19:08:41.899236 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.902385 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.902445 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.905584 kernel: audit: type=1400 audit(1696273721.873:695): avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.873000 audit: BPF prog-id=89 op=LOAD Oct 2 19:08:41.873000 audit[2372]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c000293660 items=0 ppid=1531 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:41.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633936333235393563623237373535353639376630306438333966 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.883000 audit: BPF prog-id=90 op=LOAD Oct 2 19:08:41.883000 audit[2372]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0002936a8 items=0 ppid=1531 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:41.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633936333235393563623237373535353639376630306438333966 Oct 2 19:08:41.885000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:08:41.885000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { perfmon } for pid=2372 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit[2372]: AVC avc: denied { bpf } for pid=2372 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:41.885000 audit: BPF prog-id=91 op=LOAD Oct 2 19:08:41.885000 audit[2372]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c000293738 items=0 ppid=1531 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:41.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633936333235393563623237373535353639376630306438333966 Oct 2 19:08:42.108460 env[1113]: time="2023-10-02T19:08:42.108387665Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:08:42.145831 env[1113]: time="2023-10-02T19:08:42.145766503Z" level=error msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" failed" error="failed to destroy network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:08:42.146123 kubelet[1417]: E1002 19:08:42.146098 1417 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:42.146195 kubelet[1417]: E1002 19:08:42.146161 1417 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330"} Oct 2 19:08:42.146222 kubelet[1417]: E1002 19:08:42.146198 1417 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"059f120d-41c2-40ee-916b-51ed03391c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:08:42.146286 kubelet[1417]: E1002 19:08:42.146227 1417 pod_workers.go:1300] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"059f120d-41c2-40ee-916b-51ed03391c22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9jw66" podUID="059f120d-41c2-40ee-916b-51ed03391c22" Oct 2 19:08:42.166801 env[1113]: time="2023-10-02T19:08:42.166722613Z" level=info msg="StartContainer for \"b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94\" returns successfully" Oct 2 19:08:42.208477 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 2 19:08:42.208651 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 2 19:08:42.232373 kubelet[1417]: E1002 19:08:42.232339 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:42.245318 kubelet[1417]: I1002 19:08:42.245260 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gv4q6" podStartSLOduration=2.652445422 podCreationTimestamp="2023-10-02 19:08:00 +0000 UTC" firstStartedPulling="2023-10-02 19:08:02.164834783 +0000 UTC m=+3.666872726" lastFinishedPulling="2023-10-02 19:08:41.757592376 +0000 UTC m=+43.259630319" observedRunningTime="2023-10-02 19:08:42.244697347 +0000 UTC m=+43.746735330" watchObservedRunningTime="2023-10-02 19:08:42.245203015 +0000 UTC m=+43.747240958" Oct 2 19:08:42.899626 kubelet[1417]: E1002 19:08:42.899504 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:43.235840 kubelet[1417]: E1002 19:08:43.235251 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:43.562000 audit[2531]: AVC avc: denied { write } for pid=2531 comm="tee" name="fd" dev="proc" ino=20136 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.562000 audit[2531]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3d1c3994 a2=241 a3=1b6 items=1 ppid=2514 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.562000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 2 19:08:43.562000 audit: PATH item=0 name="/dev/fd/63" inode=20133 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.562000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.583000 audit[2564]: AVC avc: denied { write } for pid=2564 comm="tee" name="fd" dev="proc" ino=19115 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.584000 audit[2550]: AVC avc: denied { write } for pid=2550 comm="tee" name="fd" dev="proc" ino=21828 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.584000 audit[2550]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff7e2b3993 a2=241 a3=1b6 items=1 ppid=2525 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.584000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 2 19:08:43.584000 audit: PATH item=0 name="/dev/fd/63" inode=21820 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.583000 audit[2564]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdaec48995 a2=241 a3=1b6 items=1 ppid=2521 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.583000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 2 19:08:43.583000 audit: PATH item=0 name="/dev/fd/63" inode=19109 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.590000 audit[2571]: AVC avc: denied { write } for pid=2571 comm="tee" name="fd" dev="proc" ino=21005 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.592000 audit[2578]: AVC avc: denied { write } for pid=2578 comm="tee" name="fd" dev="proc" ino=21008 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.593000 audit[2581]: AVC avc: denied { write } for pid=2581 comm="tee" name="fd" dev="proc" ino=21011 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.590000 audit[2571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea9470983 a2=241 a3=1b6 items=1 ppid=2512 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.590000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 2 19:08:43.590000 audit: PATH item=0 name="/dev/fd/63" inode=20996 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.593000 audit[2581]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe16693993 a2=241 a3=1b6 items=1 ppid=2520 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:08:43.593000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 2 19:08:43.593000 audit: PATH item=0 name="/dev/fd/63" inode=21002 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.592000 audit[2578]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd9ff39984 a2=241 a3=1b6 items=1 ppid=2513 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.592000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 2 19:08:43.592000 audit: PATH item=0 name="/dev/fd/63" inode=20999 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.620000 audit[2586]: AVC avc: denied { write } for pid=2586 comm="tee" name="fd" dev="proc" ino=21835 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:08:43.620000 audit[2586]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe6eb8c993 a2=241 a3=1b6 items=1 ppid=2533 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.620000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 2 19:08:43.620000 audit: PATH item=0 name="/dev/fd/63" inode=21832 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:08:43.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:08:43.816769 kernel: Initializing XFRM netlink socket Oct 2 19:08:43.900685 kubelet[1417]: E1002 19:08:43.900642 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.912000 audit: BPF prog-id=92 op=LOAD Oct 2 19:08:43.912000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc05efb200 a2=70 a3=7f1348ff6000 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.912000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit: BPF prog-id=93 op=LOAD Oct 2 19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc05efb200 a2=70 a3=6e items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc05efb1b0 a2=70 a3=7ffc05efb200 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit: BPF prog-id=94 op=LOAD Oct 2 
19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc05efb190 a2=70 a3=7ffc05efb200 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc05efb270 a2=70 a3=0 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc05efb260 a2=70 a3=0 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.913000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.913000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc05efb2a0 a2=70 a3=0 items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { perfmon } for pid=2661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit[2661]: AVC avc: denied { bpf } for pid=2661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.914000 audit: BPF prog-id=95 op=LOAD Oct 2 19:08:43.914000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc05efb1c0 a2=70 a3=ffffffff items=0 ppid=2534 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.914000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:08:43.916000 audit[2665]: AVC avc: denied { bpf } for pid=2665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.916000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1cf0f680 a2=70 a3=208 items=0 ppid=2534 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 2 19:08:43.916000 audit[2665]: AVC avc: denied { bpf } for pid=2665 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:43.916000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1cf0f550 a2=70 a3=3 items=0 ppid=2534 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 2 19:08:43.926000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:08:43.920619 systemd-networkd[1020]: calico_tmp_B: Failed to manage SR-IOV PF and VF ports, ignoring: Invalid argument Oct 2 19:08:43.983000 audit[2685]: NETFILTER_CFG table=raw:65 family=2 entries=19 op=nft_register_chain pid=2685 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:43.983000 audit[2685]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffcc3df7580 a2=0 a3=7ffcc3df756c items=0 ppid=2534 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.983000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:43.984000 audit[2686]: NETFILTER_CFG table=nat:66 family=2 entries=16 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:43.984000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffdd75ab910 a2=0 a3=5602ae176000 items=0 ppid=2534 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:43.985000 audit[2692]: NETFILTER_CFG table=mangle:67 family=2 entries=19 op=nft_register_chain pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:43.985000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffdfbbbee90 a2=0 a3=7ffdfbbbee7c items=0 ppid=2534 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.985000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:43.988000 audit[2689]: NETFILTER_CFG table=filter:68 family=2 entries=39 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:43.988000 audit[2689]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffe2a0b5eb0 a2=0 a3=560676471000 items=0 ppid=2534 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:43.988000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:44.829905 systemd-networkd[1020]: vxlan.calico: Link UP Oct 2 19:08:44.829912 systemd-networkd[1020]: vxlan.calico: Gained carrier Oct 2 19:08:44.901688 kubelet[1417]: E1002 
19:08:44.901639 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:45.101592 env[1113]: time="2023-10-02T19:08:45.101466994Z" level=info msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.407 [INFO][2724] k8s.go 576: Cleaning up netns ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.407 [INFO][2724] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" iface="eth0" netns="/var/run/netns/cni-431e62d5-2b14-397f-d3c7-eca4e23db651" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.408 [INFO][2724] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" iface="eth0" netns="/var/run/netns/cni-431e62d5-2b14-397f-d3c7-eca4e23db651" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.408 [INFO][2724] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" iface="eth0" netns="/var/run/netns/cni-431e62d5-2b14-397f-d3c7-eca4e23db651" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.408 [INFO][2724] k8s.go 583: Releasing IP address(es) ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.408 [INFO][2724] utils.go 196: Calico CNI releasing IP address ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.452 [INFO][2731] ipam_plugin.go 416: Releasing address using handleID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:45.464357 env[1113]: time="2023-10-02T19:08:45Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:45.464357 env[1113]: time="2023-10-02T19:08:45Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.460 [WARNING][2731] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.460 [INFO][2731] ipam_plugin.go 444: Releasing address using workloadID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:45.464357 env[1113]: time="2023-10-02T19:08:45Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:45.464357 env[1113]: 2023-10-02 19:08:45.463 [INFO][2724] k8s.go 589: Teardown processing complete. 
ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:08:45.464357 env[1113]: time="2023-10-02T19:08:45.464330795Z" level=info msg="TearDown network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" successfully" Oct 2 19:08:45.464890 env[1113]: time="2023-10-02T19:08:45.464364669Z" level=info msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" returns successfully" Oct 2 19:08:45.465338 env[1113]: time="2023-10-02T19:08:45.465294548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-bt4ql,Uid:c37eda03-464c-4d96-9ada-29c3d253b3a0,Namespace:calico-system,Attempt:1,}" Oct 2 19:08:45.465593 systemd[1]: run-netns-cni\x2d431e62d5\x2d2b14\x2d397f\x2dd3c7\x2deca4e23db651.mount: Deactivated successfully. Oct 2 19:08:45.902227 kubelet[1417]: E1002 19:08:45.902171 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:45.959983 systemd-networkd[1020]: vxlan.calico: Gained IPv6LL Oct 2 19:08:46.102473 env[1113]: time="2023-10-02T19:08:46.102264036Z" level=info msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" Oct 2 19:08:46.102473 env[1113]: time="2023-10-02T19:08:46.102309081Z" level=info msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.241 [INFO][2789] k8s.go 576: Cleaning up netns ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.241 [INFO][2789] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" iface="eth0" netns="/var/run/netns/cni-38e402cc-88b6-c619-ff4f-2e49e3d3fca9" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.242 [INFO][2789] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" iface="eth0" netns="/var/run/netns/cni-38e402cc-88b6-c619-ff4f-2e49e3d3fca9" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.242 [INFO][2789] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" iface="eth0" netns="/var/run/netns/cni-38e402cc-88b6-c619-ff4f-2e49e3d3fca9" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.242 [INFO][2789] k8s.go 583: Releasing IP address(es) ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.242 [INFO][2789] utils.go 196: Calico CNI releasing IP address ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.275 [INFO][2800] ipam_plugin.go 416: Releasing address using handleID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.294109 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:46.294109 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.286 [WARNING][2800] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.286 [INFO][2800] ipam_plugin.go 444: Releasing address using workloadID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.294109 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:46.294109 env[1113]: 2023-10-02 19:08:46.290 [INFO][2789] k8s.go 589: Teardown processing complete. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:46.293285 systemd[1]: run-netns-cni\x2d38e402cc\x2d88b6\x2dc619\x2dff4f\x2d2e49e3d3fca9.mount: Deactivated successfully. Oct 2 19:08:46.299930 env[1113]: time="2023-10-02T19:08:46.299882003Z" level=info msg="TearDown network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" successfully" Oct 2 19:08:46.300059 env[1113]: time="2023-10-02T19:08:46.300036936Z" level=info msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" returns successfully" Oct 2 19:08:46.300606 kubelet[1417]: E1002 19:08:46.300580 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:46.301771 env[1113]: time="2023-10-02T19:08:46.301745847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8glxb,Uid:2b42bb8c-9ba9-4810-bfaf-54be6161be63,Namespace:kube-system,Attempt:1,}" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.241 [INFO][2780] k8s.go 576: Cleaning up netns ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.241 [INFO][2780] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" iface="eth0" netns="/var/run/netns/cni-aa36a3cc-9332-abe1-a788-29fa4cfddb3a" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.242 [INFO][2780] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" iface="eth0" netns="/var/run/netns/cni-aa36a3cc-9332-abe1-a788-29fa4cfddb3a" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.242 [INFO][2780] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. 
ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" iface="eth0" netns="/var/run/netns/cni-aa36a3cc-9332-abe1-a788-29fa4cfddb3a" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.242 [INFO][2780] k8s.go 583: Releasing IP address(es) ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.242 [INFO][2780] utils.go 196: Calico CNI releasing IP address ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.281 [INFO][2801] ipam_plugin.go 416: Releasing address using handleID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.302585 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:46.302585 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.296 [WARNING][2801] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.296 [INFO][2801] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.302585 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:46.302585 env[1113]: 2023-10-02 19:08:46.301 [INFO][2780] k8s.go 589: Teardown processing complete. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:08:46.303875 env[1113]: time="2023-10-02T19:08:46.303842741Z" level=info msg="TearDown network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" successfully" Oct 2 19:08:46.303993 env[1113]: time="2023-10-02T19:08:46.303967026Z" level=info msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" returns successfully" Oct 2 19:08:46.304226 systemd[1]: run-netns-cni\x2daa36a3cc\x2d9332\x2dabe1\x2da788\x2d29fa4cfddb3a.mount: Deactivated successfully. 
Oct 2 19:08:46.305576 env[1113]: time="2023-10-02T19:08:46.305546241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2ckzv,Uid:20101097-40e7-4d0a-a992-23f4379dc0f4,Namespace:calico-system,Attempt:1,}" Oct 2 19:08:46.419777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:08:46.419952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidad1d0801c7: link becomes ready Oct 2 19:08:46.436867 systemd-networkd[1020]: calidad1d0801c7: Link UP Oct 2 19:08:46.437079 systemd-networkd[1020]: calidad1d0801c7: Gained carrier Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.331 [INFO][2810] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0 calico-kube-controllers-74b9887bb6- calico-system c37eda03-464c-4d96-9ada-29c3d253b3a0 968 0 2023-10-02 19:07:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b9887bb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.46 calico-kube-controllers-74b9887bb6-bt4ql eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidad1d0801c7 [] []}} ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.331 [INFO][2810] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.362 [INFO][2842] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" HandleID="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.386 [INFO][2842] ipam_plugin.go 269: Auto assigning IP ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" HandleID="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004d160), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.46", "pod":"calico-kube-controllers-74b9887bb6-bt4ql", "timestamp":"2023-10-02 19:08:46.362408517 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:46.446565 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:46.446565 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.387 [INFO][2842] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.389 [INFO][2842] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.394 [INFO][2842] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.399 [INFO][2842] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.401 [INFO][2842] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.403 [INFO][2842] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.403 [INFO][2842] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.405 [INFO][2842] ipam.go 1682: Creating new handle: k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24 Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.409 [INFO][2842] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.414 [INFO][2842] ipam.go 1216: Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.414 [INFO][2842] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" host="10.0.0.46" Oct 2 19:08:46.446565 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:08:46.446565 env[1113]: 2023-10-02 19:08:46.414 [INFO][2842] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" HandleID="k8s-pod-network.12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.416 [INFO][2810] k8s.go 383: Populated endpoint ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"c37eda03-464c-4d96-9ada-29c3d253b3a0", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"calico-kube-controllers-74b9887bb6-bt4ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidad1d0801c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.416 [INFO][2810] k8s.go 384: Calico CNI using IPs: [192.168.106.129/32] ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.416 [INFO][2810] dataplane_linux.go 68: Setting the host side veth name to calidad1d0801c7 ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.420 [INFO][2810] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.436 [INFO][2810] k8s.go 411: Added Mac, interface name, and active container ID to endpoint 
ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"c37eda03-464c-4d96-9ada-29c3d253b3a0", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24", Pod:"calico-kube-controllers-74b9887bb6-bt4ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidad1d0801c7", MAC:"16:e0:f9:ab:bb:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.447195 env[1113]: 2023-10-02 19:08:46.442 [INFO][2810] k8s.go 489: Wrote updated endpoint to datastore ContainerID="12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-bt4ql" WorkloadEndpoint="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:08:46.461000 audit[2898]: NETFILTER_CFG table=filter:69 family=2 entries=36 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:46.461000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffe782a3d50 a2=0 a3=7ffe782a3d3c items=0 ppid=2534 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.461000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:46.469685 env[1113]: time="2023-10-02T19:08:46.469313194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:46.469685 env[1113]: time="2023-10-02T19:08:46.469363059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:46.469685 env[1113]: time="2023-10-02T19:08:46.469375853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:46.469685 env[1113]: time="2023-10-02T19:08:46.469500559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24 pid=2902 runtime=io.containerd.runc.v2 Oct 2 19:08:46.488649 systemd[1]: Started cri-containerd-12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24.scope. Oct 2 19:08:46.504777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3c27bb5bc97: link becomes ready Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.517000 audit: BPF prog-id=96 op=LOAD Oct 2 19:08:46.517550 systemd-networkd[1020]: cali3c27bb5bc97: Link UP Oct 2 19:08:46.517561 systemd-networkd[1020]: cali3c27bb5bc97: Gained carrier Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2902 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.518000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132613864373431666333636630326235653461333664326136353264 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2902 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132613864373431666333636630326235653461333664326136353264 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit: BPF prog-id=97 op=LOAD Oct 2 19:08:46.518000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000308510 items=0 ppid=2902 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132613864373431666333636630326235653461333664326136353264 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit: BPF prog-id=98 op=LOAD Oct 2 19:08:46.518000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000308558 items=0 ppid=2902 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132613864373431666333636630326235653461333664326136353264 Oct 2 19:08:46.518000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:08:46.518000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: 
AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { perfmon } for pid=2912 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit[2912]: AVC avc: denied { bpf } for pid=2912 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.518000 audit: BPF prog-id=99 op=LOAD Oct 2 19:08:46.518000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000308968 items=0 ppid=2902 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132613864373431666333636630326235653461333664326136353264 Oct 2 19:08:46.520341 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.383 [INFO][2830] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0 coredns-5dd5756b68- kube-system 2b42bb8c-9ba9-4810-bfaf-54be6161be63 973 0 2023-10-02 19:07:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.46 coredns-5dd5756b68-8glxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c27bb5bc97 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.384 [INFO][2830] k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.445 [INFO][2865] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" HandleID="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.460 [INFO][2865] ipam_plugin.go 269: Auto assigning IP ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" HandleID="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000468fb0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.46", "pod":"coredns-5dd5756b68-8glxb", "timestamp":"2023-10-02 19:08:46.445502231 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:46.535774 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:46.535774 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.460 [INFO][2865] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.469 [INFO][2865] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.474 [INFO][2865] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.481 [INFO][2865] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.484 [INFO][2865] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.486 [INFO][2865] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.486 [INFO][2865] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.491 [INFO][2865] ipam.go 1682: Creating new handle: k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110 Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.495 [INFO][2865] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.499 [INFO][2865] ipam.go 1216: Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" 
host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.499 [INFO][2865] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" host="10.0.0.46" Oct 2 19:08:46.535774 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:46.535774 env[1113]: 2023-10-02 19:08:46.499 [INFO][2865] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" HandleID="k8s-pod-network.a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.536609 env[1113]: 2023-10-02 19:08:46.502 [INFO][2830] k8s.go 383: Populated endpoint ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b42bb8c-9ba9-4810-bfaf-54be6161be63", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"coredns-5dd5756b68-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c27bb5bc97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.536609 env[1113]: 2023-10-02 19:08:46.502 [INFO][2830] k8s.go 384: Calico CNI using IPs: [192.168.106.130/32] ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.536609 env[1113]: 2023-10-02 19:08:46.502 [INFO][2830] dataplane_linux.go 68: Setting the host side veth name to cali3c27bb5bc97 ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.536609 env[1113]: 2023-10-02 
19:08:46.505 [INFO][2830] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.536609 env[1113]: 2023-10-02 19:08:46.517 [INFO][2830] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b42bb8c-9ba9-4810-bfaf-54be6161be63", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110", Pod:"coredns-5dd5756b68-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c27bb5bc97", MAC:"da:40:fc:43:e3:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.536609 env[1113]: 2023-10-02 19:08:46.533 [INFO][2830] k8s.go 489: Wrote updated endpoint to datastore ContainerID="a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110" Namespace="kube-system" Pod="coredns-5dd5756b68-8glxb" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:46.550000 audit[2950]: NETFILTER_CFG table=filter:70 family=2 entries=40 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:46.550000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffe06853010 a2=0 a3=7ffe06852ffc items=0 ppid=2534 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.550000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:46.553352 env[1113]: 
time="2023-10-02T19:08:46.553105870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:46.553352 env[1113]: time="2023-10-02T19:08:46.553171695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:46.553352 env[1113]: time="2023-10-02T19:08:46.553192384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:46.553644 env[1113]: time="2023-10-02T19:08:46.553408663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110 pid=2958 runtime=io.containerd.runc.v2 Oct 2 19:08:46.582868 systemd[1]: Started cri-containerd-a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110.scope. Oct 2 19:08:46.592055 env[1113]: time="2023-10-02T19:08:46.592000769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-bt4ql,Uid:c37eda03-464c-4d96-9ada-29c3d253b3a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24\"" Oct 2 19:08:46.594078 env[1113]: time="2023-10-02T19:08:46.594045224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.25.0\"" Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.596000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:08:46.596000 audit: BPF prog-id=100 op=LOAD Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2958 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131303234333830393538656434376261353762336266383734623061 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598798 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali89c9056dde6: link becomes ready Oct 2 19:08:46.597000 audit[2966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2958 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131303234333830393538656434376261353762336266383734623061 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit: BPF prog-id=101 op=LOAD Oct 2 19:08:46.597000 audit[2966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003047d0 items=0 ppid=2958 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131303234333830393538656434376261353762336266383734623061 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.597000 audit: BPF prog-id=102 op=LOAD Oct 2 19:08:46.597000 audit[2966]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000304818 items=0 ppid=2958 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.597000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131303234333830393538656434376261353762336266383734623061 Oct 2 19:08:46.597000 audit: BPF prog-id=102 op=UNLOAD Oct 2 19:08:46.598000 audit: BPF prog-id=101 op=UNLOAD Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.598000 audit: BPF prog-id=103 op=LOAD Oct 2 19:08:46.598000 audit[2966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000304c28 items=0 ppid=2958 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131303234333830393538656434376261353762336266383734623061 Oct 2 19:08:46.599710 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:08:46.603456 systemd-networkd[1020]: cali89c9056dde6: Link UP Oct 2 19:08:46.603463 systemd-networkd[1020]: cali89c9056dde6: Gained carrier Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.387 [INFO][2841] 
plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-csi--node--driver--2ckzv-eth0 csi-node-driver- calico-system 20101097-40e7-4d0a-a992-23f4379dc0f4 974 0 2023-10-02 19:08:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6b49688c47 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.46 csi-node-driver-2ckzv eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali89c9056dde6 [] []}} ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.387 [INFO][2841] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.449 [INFO][2866] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" HandleID="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.470 [INFO][2866] ipam_plugin.go 269: Auto assigning IP ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" HandleID="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c4d70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.46", "pod":"csi-node-driver-2ckzv", "timestamp":"2023-10-02 19:08:46.449392425 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:46.619155 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:46.619155 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.499 [INFO][2866] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.505 [INFO][2866] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.512 [INFO][2866] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.519 [INFO][2866] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.523 [INFO][2866] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.532 [INFO][2866] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.532 [INFO][2866] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.542 [INFO][2866] ipam.go 1682: Creating new handle: k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.577 [INFO][2866] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.593 [INFO][2866] ipam.go 1216: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.594 [INFO][2866] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" host="10.0.0.46" Oct 2 19:08:46.619155 env[1113]: time="2023-10-02T19:08:46Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:08:46.619155 env[1113]: 2023-10-02 19:08:46.594 [INFO][2866] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" HandleID="k8s-pod-network.8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.596 [INFO][2841] k8s.go 383: Populated endpoint ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-csi--node--driver--2ckzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20101097-40e7-4d0a-a992-23f4379dc0f4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"csi-node-driver-2ckzv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali89c9056dde6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.596 [INFO][2841] k8s.go 384: Calico CNI using IPs: [192.168.106.131/32] ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.596 [INFO][2841] dataplane_linux.go 68: Setting the host side veth name to cali89c9056dde6 ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.598 [INFO][2841] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.603 [INFO][2841] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-csi--node--driver--2ckzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20101097-40e7-4d0a-a992-23f4379dc0f4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed", Pod:"csi-node-driver-2ckzv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali89c9056dde6", MAC:"fe:6d:4d:62:02:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:46.619872 env[1113]: 2023-10-02 19:08:46.615 [INFO][2841] k8s.go 489: Wrote updated endpoint to datastore ContainerID="8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed" Namespace="calico-system" Pod="csi-node-driver-2ckzv" WorkloadEndpoint="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:08:46.635196 env[1113]: time="2023-10-02T19:08:46.635146905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8glxb,Uid:2b42bb8c-9ba9-4810-bfaf-54be6161be63,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110\"" Oct 2 19:08:46.636127 kubelet[1417]: E1002 19:08:46.636104 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:46.639000 audit[3010]: NETFILTER_CFG table=filter:71 family=2 entries=38 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:46.639000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=19508 a0=3 a1=7fff2365a330 a2=0 a3=7fff2365a31c items=0 ppid=2534 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.639000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:46.781161 env[1113]: time="2023-10-02T19:08:46.781068011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:46.781161 env[1113]: time="2023-10-02T19:08:46.781117775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:46.781161 env[1113]: time="2023-10-02T19:08:46.781129538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:46.781357 env[1113]: time="2023-10-02T19:08:46.781281175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed pid=3018 runtime=io.containerd.runc.v2 Oct 2 19:08:46.793173 systemd[1]: Started cri-containerd-8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed.scope. Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.803000 audit: BPF prog-id=104 op=LOAD Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3018 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.804000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313861643566326463663032383436343337313363393835316435 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=3018 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313861643566326463663032383436343337313363393835316435 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.804000 audit: BPF prog-id=105 op=LOAD Oct 2 19:08:46.804000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0001dda00 items=0 ppid=3018 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313861643566326463663032383436343337313363393835316435 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.805000 audit: BPF prog-id=106 op=LOAD Oct 2 19:08:46.805000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0001dda48 items=0 ppid=3018 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313861643566326463663032383436343337313363393835316435 Oct 2 19:08:46.806000 audit: BPF prog-id=106 op=UNLOAD Oct 2 19:08:46.806000 audit: BPF prog-id=105 op=UNLOAD Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: 
AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { perfmon } for pid=3028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit[3028]: AVC avc: denied { bpf } for pid=3028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:46.806000 audit: BPF prog-id=107 op=LOAD Oct 2 19:08:46.806000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0001dde58 items=0 ppid=3018 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:46.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836313861643566326463663032383436343337313363393835316435 Oct 2 19:08:46.808273 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:08:46.817170 env[1113]: time="2023-10-02T19:08:46.817119095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2ckzv,Uid:20101097-40e7-4d0a-a992-23f4379dc0f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed\"" Oct 2 19:08:46.902865 kubelet[1417]: E1002 19:08:46.902802 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:47.101794 env[1113]: time="2023-10-02T19:08:47.101728138Z" level=info msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.291 [INFO][3066] k8s.go 576: Cleaning up netns ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.291 [INFO][3066] dataplane_linux.go 524: Deleting workload's device in netns. 
ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" iface="eth0" netns="/var/run/netns/cni-e90741ea-8834-f1cb-dc0d-6de70d497480" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.292 [INFO][3066] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" iface="eth0" netns="/var/run/netns/cni-e90741ea-8834-f1cb-dc0d-6de70d497480" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.292 [INFO][3066] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" iface="eth0" netns="/var/run/netns/cni-e90741ea-8834-f1cb-dc0d-6de70d497480" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.292 [INFO][3066] k8s.go 583: Releasing IP address(es) ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.292 [INFO][3066] utils.go 196: Calico CNI releasing IP address ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.312 [INFO][3074] ipam_plugin.go 416: Releasing address using handleID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.327934 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:47.327934 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.320 [WARNING][3074] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.320 [INFO][3074] ipam_plugin.go 444: Releasing address using workloadID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.327934 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:47.327934 env[1113]: 2023-10-02 19:08:47.326 [INFO][3066] k8s.go 589: Teardown processing complete. 
ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:47.329178 env[1113]: time="2023-10-02T19:08:47.329123734Z" level=info msg="TearDown network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" successfully" Oct 2 19:08:47.329178 env[1113]: time="2023-10-02T19:08:47.329166085Z" level=info msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" returns successfully" Oct 2 19:08:47.329941 env[1113]: time="2023-10-02T19:08:47.329912535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-54ds6,Uid:486964bf-aef1-40b3-8363-5586f9f415ec,Namespace:default,Attempt:1,}" Oct 2 19:08:47.452383 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:08:47.452541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali975debe7355: link becomes ready Oct 2 19:08:47.461697 systemd-networkd[1020]: cali975debe7355: Link UP Oct 2 19:08:47.461706 systemd-networkd[1020]: cali975debe7355: Gained carrier Oct 2 19:08:47.468426 systemd[1]: run-containerd-runc-k8s.io-a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110-runc.W86dGr.mount: Deactivated successfully. Oct 2 19:08:47.468553 systemd[1]: run-netns-cni\x2de90741ea\x2d8834\x2df1cb\x2ddc0d\x2d6de70d497480.mount: Deactivated successfully. Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.375 [INFO][3081] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0 nginx-deployment-6d5f899847- default 486964bf-aef1-40b3-8363-5586f9f415ec 990 0 2023-10-02 19:08:33 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.46 nginx-deployment-6d5f899847-54ds6 eth0 default [] [] [kns.default ksa.default.default] cali975debe7355 [] []}} ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.375 [INFO][3081] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.404 [INFO][3094] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.419 [INFO][3094] ipam_plugin.go 269: Auto assigning IP ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e6d80), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.46", "pod":"nginx-deployment-6d5f899847-54ds6", "timestamp":"2023-10-02 19:08:47.40404571 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:47.480840 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:47.480840 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.419 [INFO][3094] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.422 [INFO][3094] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.428 [INFO][3094] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.433 [INFO][3094] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.436 [INFO][3094] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.438 [INFO][3094] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.438 [INFO][3094] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.440 [INFO][3094] ipam.go 1682: Creating new handle: k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.442 [INFO][3094] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.447 [INFO][3094] ipam.go 1216: Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.447 [INFO][3094] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" host="10.0.0.46" Oct 2 19:08:47.480840 env[1113]: time="2023-10-02T19:08:47Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:08:47.480840 env[1113]: 2023-10-02 19:08:47.447 [INFO][3094] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.449 [INFO][3081] k8s.go 383: Populated endpoint ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"486964bf-aef1-40b3-8363-5586f9f415ec", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"nginx-deployment-6d5f899847-54ds6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali975debe7355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.449 [INFO][3081] k8s.go 384: Calico CNI using IPs: [192.168.106.132/32] ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.449 [INFO][3081] dataplane_linux.go 68: Setting the host side veth name to cali975debe7355 ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.452 [INFO][3081] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.462 [INFO][3081] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"486964bf-aef1-40b3-8363-5586f9f415ec", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f", Pod:"nginx-deployment-6d5f899847-54ds6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali975debe7355", MAC:"92:ce:de:a5:1a:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:47.481422 env[1113]: 2023-10-02 19:08:47.475 [INFO][3081] k8s.go 489: Wrote updated endpoint to datastore ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Namespace="default" Pod="nginx-deployment-6d5f899847-54ds6" WorkloadEndpoint="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:47.482000 audit[3106]: NETFILTER_CFG table=filter:72 family=2 entries=48 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:47.484766 kernel: kauditd_printk_skb: 331 callbacks suppressed Oct 2 19:08:47.484841 kernel: audit: type=1325 audit(1696273727.482:782): table=filter:72 family=2 entries=48 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:47.482000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=23440 a0=3 a1=7ffed2409a20 a2=0 a3=7ffed2409a0c items=0 ppid=2534 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.490854 kernel: audit: type=1300 audit(1696273727.482:782): arch=c000003e syscall=46 success=yes exit=23440 a0=3 a1=7ffed2409a20 a2=0 a3=7ffed2409a0c items=0 ppid=2534 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.490910 kernel: audit: type=1327 audit(1696273727.482:782): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:47.482000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:47.495548 env[1113]: time="2023-10-02T19:08:47.495452173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:47.495813 env[1113]: time="2023-10-02T19:08:47.495757581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:47.495813 env[1113]: time="2023-10-02T19:08:47.495785633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:47.496227 env[1113]: time="2023-10-02T19:08:47.496154129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f pid=3124 runtime=io.containerd.runc.v2 Oct 2 19:08:47.511117 systemd[1]: Started cri-containerd-07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f.scope. Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.527300 kernel: audit: type=1400 audit(1696273727.520:783): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.527380 kernel: audit: type=1400 audit(1696273727.520:784): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.527407 kernel: audit: type=1400 audit(1696273727.520:785): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.533715 kernel: audit: type=1400 audit(1696273727.520:786): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.533873 kernel: audit: type=1400 audit(1696273727.520:787): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.535640 kernel: audit: type=1400 audit(1696273727.520:788): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.538606 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:08:47.541359 kernel: audit: type=1400 audit(1696273727.520:789): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.520000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit: BPF prog-id=108 op=LOAD Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=3124 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646665613434643065643164383830353836666165303164373031 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=3124 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646665613434643065643164383830353836666165303164373031 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.521000 audit: BPF prog-id=109 op=LOAD Oct 2 19:08:47.521000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c0001b3050 items=0 ppid=3124 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646665613434643065643164383830353836666165303164373031 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.526000 audit: BPF prog-id=110 op=LOAD Oct 2 19:08:47.526000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c0001b3098 items=0 ppid=3124 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646665613434643065643164383830353836666165303164373031 Oct 2 19:08:47.529000 audit: BPF prog-id=110 op=UNLOAD Oct 2 19:08:47.529000 audit: BPF prog-id=109 op=UNLOAD Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { perfmon } for pid=3134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit[3134]: AVC avc: denied { bpf } for pid=3134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:08:47.529000 audit: BPF prog-id=111 op=LOAD Oct 2 19:08:47.529000 audit[3134]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c0001b34a8 items=0 ppid=3124 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:47.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646665613434643065643164383830353836666165303164373031 Oct 2 19:08:47.564825 env[1113]: time="2023-10-02T19:08:47.564693776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-54ds6,Uid:486964bf-aef1-40b3-8363-5586f9f415ec,Namespace:default,Attempt:1,} returns sandbox id \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\"" Oct 2 19:08:47.815965 systemd-networkd[1020]: calidad1d0801c7: Gained IPv6LL Oct 2 19:08:47.904029 kubelet[1417]: E1002 19:08:47.903950 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:47.943970 systemd-networkd[1020]: cali3c27bb5bc97: Gained IPv6LL Oct 2 19:08:48.008043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113630346.mount: Deactivated successfully. Oct 2 19:08:48.263868 systemd-networkd[1020]: cali89c9056dde6: Gained IPv6LL Oct 2 19:08:48.776035 systemd-networkd[1020]: cali975debe7355: Gained IPv6LL Oct 2 19:08:48.904715 kubelet[1417]: E1002 19:08:48.904665 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:49.905060 kubelet[1417]: E1002 19:08:49.904988 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:50.520587 kubelet[1417]: E1002 19:08:50.520551 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:50.906120 kubelet[1417]: E1002 19:08:50.906018 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:51.635919 env[1113]: time="2023-10-02T19:08:51.635844383Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:51.638841 env[1113]: time="2023-10-02T19:08:51.638791141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:51.643657 env[1113]: time="2023-10-02T19:08:51.641109685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:51.644001 env[1113]: time="2023-10-02T19:08:51.643967776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.25.0\" returns image reference \"sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78\"" Oct 2 19:08:51.644617 env[1113]: 
time="2023-10-02T19:08:51.644592855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Oct 2 19:08:51.645286 env[1113]: time="2023-10-02T19:08:51.645228394Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:b764feb1777655aabce5988324b69b412d23e087436ee2414dff893a158fcdef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:51.646052 env[1113]: time="2023-10-02T19:08:51.645984570Z" level=info msg="CreateContainer within sandbox \"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 2 19:08:51.655960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589770603.mount: Deactivated successfully. Oct 2 19:08:51.660209 env[1113]: time="2023-10-02T19:08:51.660139640Z" level=info msg="CreateContainer within sandbox \"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3\"" Oct 2 19:08:51.660930 env[1113]: time="2023-10-02T19:08:51.660898281Z" level=info msg="StartContainer for \"7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3\"" Oct 2 19:08:51.680434 systemd[1]: Started cri-containerd-7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3.scope. Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 
audit: BPF prog-id=112 op=LOAD Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2902 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:51.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764376331353738363938613362636664353165666234323237323934 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2902 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:51.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764376331353738363938613362636664353165666234323237323934 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit: BPF prog-id=113 op=LOAD Oct 2 19:08:51.694000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000306c70 items=0 ppid=2902 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:51.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764376331353738363938613362636664353165666234323237323934 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit: BPF prog-id=114 op=LOAD Oct 2 19:08:51.694000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000306cb8 items=0 ppid=2902 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:51.694000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764376331353738363938613362636664353165666234323237323934 Oct 2 19:08:51.694000 audit: BPF prog-id=114 op=UNLOAD Oct 2 19:08:51.694000 audit: BPF prog-id=113 op=UNLOAD Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { perfmon } for pid=3200 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit[3200]: AVC avc: denied { bpf } for pid=3200 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:51.694000 audit: BPF prog-id=115 op=LOAD Oct 2 19:08:51.694000 audit[3200]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003070c8 items=0 ppid=2902 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:51.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764376331353738363938613362636664353165666234323237323934 Oct 2 19:08:51.718769 env[1113]: time="2023-10-02T19:08:51.718670356Z" level=info msg="StartContainer for \"7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3\" returns successfully" Oct 2 19:08:51.907011 kubelet[1417]: E1002 19:08:51.906863 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:08:52.272296 kubelet[1417]: I1002 19:08:52.271742 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" podStartSLOduration=58.220926175 podCreationTimestamp="2023-10-02 19:07:49 +0000 UTC" firstStartedPulling="2023-10-02 19:08:46.593620471 +0000 UTC m=+48.095658414" lastFinishedPulling="2023-10-02 19:08:51.644346049 +0000 UTC m=+53.146384012" observedRunningTime="2023-10-02 19:08:52.267504283 +0000 UTC m=+53.769542226" watchObservedRunningTime="2023-10-02 19:08:52.271651773 +0000 UTC m=+53.773689716" Oct 2 19:08:52.907574 kubelet[1417]: E1002 19:08:52.907502 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:53.102032 env[1113]: time="2023-10-02T19:08:53.101854461Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:08:53.188647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762936056.mount: Deactivated successfully. Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.149 [INFO][3274] k8s.go 576: Cleaning up netns ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.150 [INFO][3274] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" iface="eth0" netns="/var/run/netns/cni-51cb2d41-4240-c8d6-fe7b-d34223e523dc" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.150 [INFO][3274] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" iface="eth0" netns="/var/run/netns/cni-51cb2d41-4240-c8d6-fe7b-d34223e523dc" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.151 [INFO][3274] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" iface="eth0" netns="/var/run/netns/cni-51cb2d41-4240-c8d6-fe7b-d34223e523dc" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.151 [INFO][3274] k8s.go 583: Releasing IP address(es) ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.151 [INFO][3274] utils.go 196: Calico CNI releasing IP address ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.174 [INFO][3281] ipam_plugin.go 416: Releasing address using handleID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.190377 env[1113]: time="2023-10-02T19:08:53Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:53.190377 env[1113]: time="2023-10-02T19:08:53Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.183 [WARNING][3281] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.183 [INFO][3281] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.190377 env[1113]: time="2023-10-02T19:08:53Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:53.190377 env[1113]: 2023-10-02 19:08:53.189 [INFO][3274] k8s.go 589: Teardown processing complete. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:08:53.191838 systemd[1]: run-netns-cni\x2d51cb2d41\x2d4240\x2dc8d6\x2dfe7b\x2dd34223e523dc.mount: Deactivated successfully. Oct 2 19:08:53.191967 env[1113]: time="2023-10-02T19:08:53.191875318Z" level=info msg="TearDown network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" successfully" Oct 2 19:08:53.191967 env[1113]: time="2023-10-02T19:08:53.191919131Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" returns successfully" Oct 2 19:08:53.192275 kubelet[1417]: E1002 19:08:53.192248 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:53.192669 env[1113]: time="2023-10-02T19:08:53.192643637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jw66,Uid:059f120d-41c2-40ee-916b-51ed03391c22,Namespace:kube-system,Attempt:1,}" Oct 2 19:08:53.719762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:08:53.719967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali627fa8ea144: link becomes ready Oct 2 19:08:53.726436 systemd-networkd[1020]: cali627fa8ea144: Link UP Oct 2 19:08:53.726445 systemd-networkd[1020]: cali627fa8ea144: Gained carrier Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.615 [INFO][3288] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0 coredns-5dd5756b68- kube-system 059f120d-41c2-40ee-916b-51ed03391c22 1027 0 2023-10-02 19:07:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.46 coredns-5dd5756b68-9jw66 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali627fa8ea144 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.615 [INFO][3288] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.653 [INFO][3302] ipam_plugin.go 229: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" HandleID="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.669 [INFO][3302] ipam_plugin.go 269: Auto assigning IP ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" HandleID="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c4e40), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.46", "pod":"coredns-5dd5756b68-9jw66", "timestamp":"2023-10-02 19:08:53.653521243 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:53.769510 env[1113]: time="2023-10-02T19:08:53Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:53.769510 env[1113]: time="2023-10-02T19:08:53Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.670 [INFO][3302] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.672 [INFO][3302] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.681 [INFO][3302] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.687 [INFO][3302] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.689 [INFO][3302] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.691 [INFO][3302] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.691 [INFO][3302] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.693 [INFO][3302] ipam.go 1682: Creating new handle: k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591 Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.697 [INFO][3302] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.715 [INFO][3302] ipam.go 1216: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.715 [INFO][3302] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" host="10.0.0.46" Oct 2 19:08:53.769510 env[1113]: 
time="2023-10-02T19:08:53Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:53.769510 env[1113]: 2023-10-02 19:08:53.715 [INFO][3302] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" HandleID="k8s-pod-network.4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.770415 env[1113]: 2023-10-02 19:08:53.717 [INFO][3288] k8s.go 383: Populated endpoint ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"059f120d-41c2-40ee-916b-51ed03391c22", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"coredns-5dd5756b68-9jw66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali627fa8ea144", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:53.770415 env[1113]: 2023-10-02 19:08:53.717 [INFO][3288] k8s.go 384: Calico CNI using IPs: [192.168.106.133/32] ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.770415 env[1113]: 2023-10-02 19:08:53.717 [INFO][3288] dataplane_linux.go 68: Setting the host side veth name to cali627fa8ea144 ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.770415 env[1113]: 2023-10-02 19:08:53.719 [INFO][3288] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 
19:08:53.770415 env[1113]: 2023-10-02 19:08:53.726 [INFO][3288] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"059f120d-41c2-40ee-916b-51ed03391c22", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591", Pod:"coredns-5dd5756b68-9jw66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali627fa8ea144", MAC:"c6:4c:10:7e:af:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:53.770415 env[1113]: 2023-10-02 19:08:53.766 [INFO][3288] k8s.go 489: Wrote updated endpoint to datastore ContainerID="4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591" Namespace="kube-system" Pod="coredns-5dd5756b68-9jw66" WorkloadEndpoint="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:08:53.792072 kernel: kauditd_printk_skb: 107 callbacks suppressed Oct 2 19:08:53.792228 kernel: audit: type=1325 audit(1696273733.784:819): table=filter:73 family=2 entries=42 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:53.792254 kernel: audit: type=1300 audit(1696273733.784:819): arch=c000003e syscall=46 success=yes exit=20276 a0=3 a1=7ffe67aed160 a2=0 a3=7ffe67aed14c items=0 ppid=2534 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:53.792271 kernel: audit: type=1327 audit(1696273733.784:819): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:53.784000 audit[3324]: NETFILTER_CFG table=filter:73 family=2 entries=42 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Oct 2 19:08:53.784000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=20276 a0=3 a1=7ffe67aed160 a2=0 a3=7ffe67aed14c items=0 ppid=2534 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:53.784000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:53.908234 kubelet[1417]: E1002 19:08:53.908188 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:54.103312 env[1113]: time="2023-10-02T19:08:54.103226280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:08:54.103312 env[1113]: time="2023-10-02T19:08:54.103267908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:08:54.103312 env[1113]: time="2023-10-02T19:08:54.103281515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:08:54.103791 env[1113]: time="2023-10-02T19:08:54.103465350Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591 pid=3334 runtime=io.containerd.runc.v2 Oct 2 19:08:54.115785 systemd[1]: Started cri-containerd-4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591.scope. Oct 2 19:08:54.134446 kernel: audit: type=1400 audit(1696273734.128:820): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134587 kernel: audit: type=1400 audit(1696273734.128:821): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134617 kernel: audit: type=1400 audit(1696273734.128:822): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.139303 kernel: audit: type=1400 audit(1696273734.128:823): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:08:54.139363 kernel: audit: type=1400 audit(1696273734.128:824): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.144531 kernel: audit: type=1400 audit(1696273734.128:825): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.144584 kernel: audit: type=1400 audit(1696273734.128:826): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.144713 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.133000 audit: BPF prog-id=116 op=LOAD Oct 2 19:08:54.133000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.133000 audit[3343]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=3334 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462373039393364663238333338376666386135346361363764316135 Oct 2 19:08:54.133000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.133000 audit[3343]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=3334 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462373039393364663238333338376666386135346361363764316135 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.134000 audit: BPF prog-id=117 op=LOAD Oct 2 19:08:54.134000 audit[3343]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000024860 items=0 ppid=3334 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462373039393364663238333338376666386135346361363764316135 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.136000 audit: BPF prog-id=118 op=LOAD Oct 2 19:08:54.136000 audit[3343]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0000248a8 items=0 ppid=3334 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462373039393364663238333338376666386135346361363764316135 Oct 2 19:08:54.138000 audit: BPF prog-id=118 op=UNLOAD Oct 2 19:08:54.138000 audit: BPF prog-id=117 op=UNLOAD Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { perfmon } for pid=3343 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit[3343]: AVC avc: denied { bpf } for pid=3343 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.138000 audit: BPF prog-id=119 op=LOAD Oct 2 19:08:54.138000 audit[3343]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000024cb8 items=0 ppid=3334 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462373039393364663238333338376666386135346361363764316135 Oct 2 19:08:54.202649 env[1113]: time="2023-10-02T19:08:54.202585008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jw66,Uid:059f120d-41c2-40ee-916b-51ed03391c22,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591\"" Oct 2 19:08:54.203715 kubelet[1417]: E1002 19:08:54.203688 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:54.214113 env[1113]: time="2023-10-02T19:08:54.214051595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:54.219448 env[1113]: time="2023-10-02T19:08:54.219361631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:54.221354 env[1113]: time="2023-10-02T19:08:54.221313269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:54.222904 env[1113]: time="2023-10-02T19:08:54.222853702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:08:54.223289 env[1113]: time="2023-10-02T19:08:54.223244378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Oct 2 
19:08:54.224018 env[1113]: time="2023-10-02T19:08:54.223989392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:08:54.225258 env[1113]: time="2023-10-02T19:08:54.225215442Z" level=info msg="CreateContainer within sandbox \"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 2 19:08:54.241608 env[1113]: time="2023-10-02T19:08:54.241546586Z" level=info msg="CreateContainer within sandbox \"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d2b764d0dcca3331a7655ab90978ce20577bec6c3e64008aa68dee1d7d3af89\"" Oct 2 19:08:54.242230 env[1113]: time="2023-10-02T19:08:54.242200979Z" level=info msg="StartContainer for \"2d2b764d0dcca3331a7655ab90978ce20577bec6c3e64008aa68dee1d7d3af89\"" Oct 2 19:08:54.263228 systemd[1]: Started cri-containerd-2d2b764d0dcca3331a7655ab90978ce20577bec6c3e64008aa68dee1d7d3af89.scope. Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.274000 audit: BPF prog-id=120 op=LOAD Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2958 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326237363464306463636133333331613736353561623930393738 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2958 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326237363464306463636133333331613736353561623930393738 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit: BPF prog-id=121 op=LOAD Oct 2 19:08:54.275000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 
a1=c0001bd9d8 a2=78 a3=c000024ff0 items=0 ppid=2958 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326237363464306463636133333331613736353561623930393738 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit: BPF prog-id=122 op=LOAD Oct 2 19:08:54.275000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000025038 items=0 ppid=2958 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326237363464306463636133333331613736353561623930393738 Oct 2 19:08:54.275000 audit: BPF prog-id=122 op=UNLOAD Oct 2 19:08:54.275000 audit: BPF prog-id=121 op=UNLOAD Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { perfmon } for pid=3377 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit[3377]: AVC avc: denied { bpf } for pid=3377 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:08:54.275000 audit: BPF prog-id=123 op=LOAD Oct 2 19:08:54.275000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000025448 items=0 ppid=2958 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:54.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326237363464306463636133333331613736353561623930393738 Oct 2 19:08:54.291156 env[1113]: time="2023-10-02T19:08:54.291097999Z" level=info msg="StartContainer for \"2d2b764d0dcca3331a7655ab90978ce20577bec6c3e64008aa68dee1d7d3af89\" returns successfully" Oct 2 19:08:54.458863 env[1113]: time="2023-10-02T19:08:54.458564095Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:08:54.459913 env[1113]: time="2023-10-02T19:08:54.459851572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:08:54.460930 kubelet[1417]: E1002 19:08:54.460260 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:08:54.460930 kubelet[1417]: E1002 19:08:54.460314 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:08:54.461094 kubelet[1417]: E1002 19:08:54.460544 1417 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:08:54.461713 env[1113]: time="2023-10-02T19:08:54.461651814Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Oct 2 19:08:54.908949 kubelet[1417]: E1002 19:08:54.908888 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:55.185077 systemd[1]: run-containerd-runc-k8s.io-2d2b764d0dcca3331a7655ab90978ce20577bec6c3e64008aa68dee1d7d3af89-runc.Zexfkk.mount: Deactivated successfully. 
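The proctitle= values in the audit records above and below are the audited process's command line, hex-encoded with NUL bytes separating the arguments (here runc and iptables-restore invocations). A minimal decoding sketch, assuming Python 3; the sample string is the iptables-restore proctitle that appears later in this log:

def decode_proctitle(hex_string: str) -> str:
    # auditd logs the process argv as hex; arguments are separated by NUL bytes.
    raw = bytes.fromhex(hex_string)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Decodes to: iptables-restore -w 5 -W 100000 --noflush --counters
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))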
Oct 2 19:08:55.273392 kubelet[1417]: E1002 19:08:55.273242 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:55.290040 kubelet[1417]: I1002 19:08:55.289967 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8glxb" podStartSLOduration=71.70307166 podCreationTimestamp="2023-10-02 19:07:36 +0000 UTC" firstStartedPulling="2023-10-02 19:08:46.636779552 +0000 UTC m=+48.138817495" lastFinishedPulling="2023-10-02 19:08:54.223613424 +0000 UTC m=+55.725651367" observedRunningTime="2023-10-02 19:08:55.28954881 +0000 UTC m=+56.791586753" watchObservedRunningTime="2023-10-02 19:08:55.289905532 +0000 UTC m=+56.791943475" Oct 2 19:08:55.491000 audit[3411]: NETFILTER_CFG table=filter:74 family=2 entries=14 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:55.491000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7fff92cd2840 a2=0 a3=7fff92cd282c items=0 ppid=1618 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:55.491000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:55.492000 audit[3411]: NETFILTER_CFG table=nat:75 family=2 entries=14 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:55.492000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=3300 a0=3 a1=7fff92cd2840 a2=0 a3=31030 items=0 ppid=1618 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:55.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:55.508000 audit[3413]: NETFILTER_CFG table=filter:76 family=2 entries=11 op=nft_register_rule pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:55.508000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=2844 a0=3 a1=7ffed8e2edb0 a2=0 a3=7ffed8e2ed9c items=0 ppid=1618 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:55.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:55.509000 audit[3413]: NETFILTER_CFG table=nat:77 family=2 entries=35 op=nft_register_chain pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:55.509000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=13788 a0=3 a1=7ffed8e2edb0 a2=0 a3=7ffed8e2ed9c items=0 ppid=1618 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:55.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 
19:08:55.559933 systemd-networkd[1020]: cali627fa8ea144: Gained IPv6LL Oct 2 19:08:55.569315 kubelet[1417]: I1002 19:08:55.569266 1417 topology_manager.go:215] "Topology Admit Handler" podUID="923160b0-11c2-47d7-a5f7-1797d0326d64" podNamespace="calico-apiserver" podName="calico-apiserver-545f75f4b-lfpx8" Oct 2 19:08:55.572429 kubelet[1417]: I1002 19:08:55.572397 1417 topology_manager.go:215] "Topology Admit Handler" podUID="7b5e0374-3867-4271-8e1f-18cd2d2377f8" podNamespace="calico-apiserver" podName="calico-apiserver-545f75f4b-g2fhn" Oct 2 19:08:55.575278 systemd[1]: Created slice kubepods-besteffort-pod923160b0_11c2_47d7_a5f7_1797d0326d64.slice. Oct 2 19:08:55.579422 systemd[1]: Created slice kubepods-besteffort-pod7b5e0374_3867_4271_8e1f_18cd2d2377f8.slice. Oct 2 19:08:55.728533 kubelet[1417]: I1002 19:08:55.728182 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85nm\" (UniqueName: \"kubernetes.io/projected/923160b0-11c2-47d7-a5f7-1797d0326d64-kube-api-access-g85nm\") pod \"calico-apiserver-545f75f4b-lfpx8\" (UID: \"923160b0-11c2-47d7-a5f7-1797d0326d64\") " pod="calico-apiserver/calico-apiserver-545f75f4b-lfpx8" Oct 2 19:08:55.728533 kubelet[1417]: I1002 19:08:55.728264 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs\") pod \"calico-apiserver-545f75f4b-g2fhn\" (UID: \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\") " pod="calico-apiserver/calico-apiserver-545f75f4b-g2fhn" Oct 2 19:08:55.728533 kubelet[1417]: I1002 19:08:55.728289 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62c5\" (UniqueName: \"kubernetes.io/projected/7b5e0374-3867-4271-8e1f-18cd2d2377f8-kube-api-access-q62c5\") pod \"calico-apiserver-545f75f4b-g2fhn\" (UID: \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\") " pod="calico-apiserver/calico-apiserver-545f75f4b-g2fhn" Oct 2 19:08:55.728533 kubelet[1417]: I1002 19:08:55.728318 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs\") pod \"calico-apiserver-545f75f4b-lfpx8\" (UID: \"923160b0-11c2-47d7-a5f7-1797d0326d64\") " pod="calico-apiserver/calico-apiserver-545f75f4b-lfpx8" Oct 2 19:08:55.828779 kubelet[1417]: E1002 19:08:55.828726 1417 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 2 19:08:55.828938 kubelet[1417]: E1002 19:08:55.828845 1417 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs podName:923160b0-11c2-47d7-a5f7-1797d0326d64 nodeName:}" failed. No retries permitted until 2023-10-02 19:08:56.328820651 +0000 UTC m=+57.830858594 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs") pod "calico-apiserver-545f75f4b-lfpx8" (UID: "923160b0-11c2-47d7-a5f7-1797d0326d64") : secret "calico-apiserver-certs" not found Oct 2 19:08:55.828938 kubelet[1417]: E1002 19:08:55.828728 1417 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 2 19:08:55.828938 kubelet[1417]: E1002 19:08:55.828885 1417 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs podName:7b5e0374-3867-4271-8e1f-18cd2d2377f8 nodeName:}" failed. No retries permitted until 2023-10-02 19:08:56.328876386 +0000 UTC m=+57.830914329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs") pod "calico-apiserver-545f75f4b-g2fhn" (UID: "7b5e0374-3867-4271-8e1f-18cd2d2377f8") : secret "calico-apiserver-certs" not found Oct 2 19:08:55.909385 kubelet[1417]: E1002 19:08:55.909327 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:56.275820 kubelet[1417]: E1002 19:08:56.275695 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:56.478695 env[1113]: time="2023-10-02T19:08:56.478633698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545f75f4b-lfpx8,Uid:923160b0-11c2-47d7-a5f7-1797d0326d64,Namespace:calico-apiserver,Attempt:0,}" Oct 2 19:08:56.482095 env[1113]: time="2023-10-02T19:08:56.482071021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545f75f4b-g2fhn,Uid:7b5e0374-3867-4271-8e1f-18cd2d2377f8,Namespace:calico-apiserver,Attempt:0,}" Oct 2 19:08:56.539000 audit[3425]: NETFILTER_CFG table=filter:78 family=2 entries=9 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:56.539000 audit[3425]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffcf96c77a0 a2=0 a3=7ffcf96c778c items=0 ppid=1618 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:56.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:56.539000 audit[3425]: NETFILTER_CFG table=nat:79 family=2 entries=20 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:08:56.539000 audit[3425]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffcf96c77a0 a2=0 a3=7ffcf96c778c items=0 ppid=1618 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:56.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:08:56.909709 kubelet[1417]: E1002 19:08:56.909636 1417 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:57.277231 kubelet[1417]: E1002 19:08:57.277114 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:08:57.910283 kubelet[1417]: E1002 19:08:57.910227 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:58.858053 kubelet[1417]: E1002 19:08:58.858001 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:58.863168 env[1113]: time="2023-10-02T19:08:58.863127759Z" level=info msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" Oct 2 19:08:58.911159 kubelet[1417]: E1002 19:08:58.911109 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.003 [WARNING][3444] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b42bb8c-9ba9-4810-bfaf-54be6161be63", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110", Pod:"coredns-5dd5756b68-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c27bb5bc97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.003 [INFO][3444] k8s.go 576: Cleaning up netns ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.003 [INFO][3444] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" iface="eth0" netns="" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.003 [INFO][3444] k8s.go 583: Releasing IP address(es) ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.003 [INFO][3444] utils.go 196: Calico CNI releasing IP address ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.023 [INFO][3451] ipam_plugin.go 416: Releasing address using handleID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.034086 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:59.034086 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.030 [WARNING][3451] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.030 [INFO][3451] ipam_plugin.go 444: Releasing address using workloadID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.034086 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:59.034086 env[1113]: 2023-10-02 19:08:59.032 [INFO][3444] k8s.go 589: Teardown processing complete. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.034800 env[1113]: time="2023-10-02T19:08:59.034120016Z" level=info msg="TearDown network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" successfully" Oct 2 19:08:59.034800 env[1113]: time="2023-10-02T19:08:59.034159951Z" level=info msg="StopPodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" returns successfully" Oct 2 19:08:59.034911 env[1113]: time="2023-10-02T19:08:59.034873384Z" level=info msg="RemovePodSandbox for \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" Oct 2 19:08:59.034958 env[1113]: time="2023-10-02T19:08:59.034919401Z" level=info msg="Forcibly stopping sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\"" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.065 [WARNING][3473] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b42bb8c-9ba9-4810-bfaf-54be6161be63", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"a1024380958ed47ba57b3bf874b0ac08668cb5c2fdd280721a6377a7509fa110", Pod:"coredns-5dd5756b68-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c27bb5bc97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.065 [INFO][3473] k8s.go 576: Cleaning up netns ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.065 [INFO][3473] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" iface="eth0" netns="" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.065 [INFO][3473] k8s.go 583: Releasing IP address(es) ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.065 [INFO][3473] utils.go 196: Calico CNI releasing IP address ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.082 [INFO][3480] ipam_plugin.go 416: Releasing address using handleID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.092563 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:59.092563 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.088 [WARNING][3480] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.089 [INFO][3480] ipam_plugin.go 444: Releasing address using workloadID ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" HandleID="k8s-pod-network.e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Workload="10.0.0.46-k8s-coredns--5dd5756b68--8glxb-eth0" Oct 2 19:08:59.092563 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:59.092563 env[1113]: 2023-10-02 19:08:59.091 [INFO][3473] k8s.go 589: Teardown processing complete. ContainerID="e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d" Oct 2 19:08:59.093151 env[1113]: time="2023-10-02T19:08:59.092590120Z" level=info msg="TearDown network for sandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" successfully" Oct 2 19:08:59.645553 env[1113]: time="2023-10-02T19:08:59.639518243Z" level=info msg="RemovePodSandbox \"e417dc94ad5d497650ce83bab65ad87e91bccd3e490a8ce34292b7aef0fc662d\" returns successfully" Oct 2 19:08:59.645553 env[1113]: time="2023-10-02T19:08:59.640187542Z" level=info msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.701 [WARNING][3534] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"486964bf-aef1-40b3-8363-5586f9f415ec", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f", Pod:"nginx-deployment-6d5f899847-54ds6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali975debe7355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.701 [INFO][3534] k8s.go 576: Cleaning up netns ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.701 [INFO][3534] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" iface="eth0" netns="" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.701 [INFO][3534] k8s.go 583: Releasing IP address(es) ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.702 [INFO][3534] utils.go 196: Calico CNI releasing IP address ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.725 [INFO][3560] ipam_plugin.go 416: Releasing address using handleID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:59.740854 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:59.740854 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.733 [WARNING][3560] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.733 [INFO][3560] ipam_plugin.go 444: Releasing address using workloadID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:08:59.740854 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:08:59.740854 env[1113]: 2023-10-02 19:08:59.739 [INFO][3534] k8s.go 589: Teardown processing complete. 
ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:08:59.741538 env[1113]: time="2023-10-02T19:08:59.740909298Z" level=info msg="TearDown network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" successfully" Oct 2 19:08:59.741538 env[1113]: time="2023-10-02T19:08:59.740957780Z" level=info msg="StopPodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" returns successfully" Oct 2 19:08:59.741845 env[1113]: time="2023-10-02T19:08:59.741799283Z" level=info msg="RemovePodSandbox for \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" Oct 2 19:08:59.741930 env[1113]: time="2023-10-02T19:08:59.741842925Z" level=info msg="Forcibly stopping sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\"" Oct 2 19:08:59.794678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:08:59.794831 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibf780e7f14b: link becomes ready Oct 2 19:08:59.802008 systemd-networkd[1020]: calibf780e7f14b: Link UP Oct 2 19:08:59.802017 systemd-networkd[1020]: calibf780e7f14b: Gained carrier Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.704 [INFO][3533] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0 calico-apiserver-545f75f4b- calico-apiserver 7b5e0374-3867-4271-8e1f-18cd2d2377f8 1101 0 2023-10-02 19:08:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545f75f4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.46 calico-apiserver-545f75f4b-g2fhn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf780e7f14b [] []}} ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.705 [INFO][3533] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.734 [INFO][3565] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.747 [INFO][3565] ipam_plugin.go 269: Auto assigning IP ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010c6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.46", "pod":"calico-apiserver-545f75f4b-g2fhn", "timestamp":"2023-10-02 19:08:59.734651002 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:08:59.817122 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:08:59.817122 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.747 [INFO][3565] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.749 [INFO][3565] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.753 [INFO][3565] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.756 [INFO][3565] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.758 [INFO][3565] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.761 [INFO][3565] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.761 [INFO][3565] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.762 [INFO][3565] ipam.go 1682: Creating new handle: k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.766 [INFO][3565] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.781 [INFO][3565] ipam.go 1216: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.781 [INFO][3565] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" host="10.0.0.46" Oct 2 19:08:59.817122 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:08:59.817122 env[1113]: 2023-10-02 19:08:59.781 [INFO][3565] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.783 [INFO][3533] k8s.go 383: Populated endpoint ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0", GenerateName:"calico-apiserver-545f75f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b5e0374-3867-4271-8e1f-18cd2d2377f8", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545f75f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"calico-apiserver-545f75f4b-g2fhn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf780e7f14b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.783 [INFO][3533] k8s.go 384: Calico CNI using IPs: [192.168.106.134/32] ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.783 [INFO][3533] dataplane_linux.go 68: Setting the host side veth name to calibf780e7f14b ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.794 [INFO][3533] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.802 [INFO][3533] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" 
Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0", GenerateName:"calico-apiserver-545f75f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b5e0374-3867-4271-8e1f-18cd2d2377f8", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545f75f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a", Pod:"calico-apiserver-545f75f4b-g2fhn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf780e7f14b", MAC:"72:25:57:e3:aa:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:08:59.820573 env[1113]: 2023-10-02 19:08:59.809 [INFO][3533] k8s.go 489: Wrote updated endpoint to datastore ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-g2fhn" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:08:59.831000 audit[3628]: NETFILTER_CFG table=filter:80 family=2 entries=63 op=nft_register_chain pid=3628 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:59.835113 kernel: kauditd_printk_skb: 125 callbacks suppressed Oct 2 19:08:59.835224 kernel: audit: type=1325 audit(1696273739.831:862): table=filter:80 family=2 entries=63 op=nft_register_chain pid=3628 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:08:59.835270 kernel: audit: type=1300 audit(1696273739.831:862): arch=c000003e syscall=46 success=yes exit=30480 a0=3 a1=7ffc443f5c50 a2=0 a3=7ffc443f5c3c items=0 ppid=2534 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:59.831000 audit[3628]: SYSCALL arch=c000003e syscall=46 success=yes exit=30480 a0=3 a1=7ffc443f5c50 a2=0 a3=7ffc443f5c3c items=0 ppid=2534 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:08:59.831000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:59.841404 kernel: audit: type=1327 audit(1696273739.831:862): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:08:59.912098 kubelet[1417]: E1002 19:08:59.911986 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:00.112217 env[1113]: time="2023-10-02T19:09:00.112141006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:09:00.112217 env[1113]: time="2023-10-02T19:09:00.112192012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:09:00.112217 env[1113]: time="2023-10-02T19:09:00.112205018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:09:00.112675 env[1113]: time="2023-10-02T19:09:00.112327998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a pid=3650 runtime=io.containerd.runc.v2 Oct 2 19:09:00.123208 systemd[1]: Started cri-containerd-72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a.scope. Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.136545 kernel: audit: type=1400 audit(1696273740.131:863): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.136585 kernel: audit: type=1400 audit(1696273740.131:864): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.136603 kernel: audit: type=1400 audit(1696273740.131:865): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.140605 kernel: audit: type=1400 audit(1696273740.131:866): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.140646 kernel: audit: type=1400 audit(1696273740.131:867): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.144502 kernel: audit: type=1400 audit(1696273740.131:868): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.146598 kernel: audit: type=1400 audit(1696273740.131:869): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.131000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.135000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.135000 audit: BPF prog-id=124 op=LOAD Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3650 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653938356166613530363730393936393231396633303933343131 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=3650 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653938356166613530363730393936393231396633303933343131 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit: BPF prog-id=125 op=LOAD Oct 2 19:09:00.137000 audit[3660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00031c450 items=0 ppid=3650 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653938356166613530363730393936393231396633303933343131 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.137000 audit: BPF prog-id=126 op=LOAD Oct 2 19:09:00.137000 audit[3660]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00031c498 items=0 ppid=3650 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653938356166613530363730393936393231396633303933343131 Oct 2 19:09:00.141000 audit: BPF prog-id=126 op=UNLOAD Oct 2 19:09:00.141000 audit: BPF prog-id=125 op=UNLOAD Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { perfmon } for pid=3660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { bpf } for pid=3660 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit[3660]: AVC avc: denied { bpf } for pid=3660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:00.141000 audit: BPF prog-id=127 op=LOAD Oct 2 19:09:00.141000 audit[3660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00031c8a8 items=0 ppid=3650 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653938356166613530363730393936393231396633303933343131 Oct 2 19:09:00.146682 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:09:00.169346 env[1113]: time="2023-10-02T19:09:00.169233004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545f75f4b-g2fhn,Uid:7b5e0374-3867-4271-8e1f-18cd2d2377f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\"" Oct 2 19:09:00.236770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid595cff26a5: link becomes ready Oct 2 19:09:00.247534 systemd-networkd[1020]: calid595cff26a5: Link UP Oct 2 19:09:00.247540 systemd-networkd[1020]: calid595cff26a5: Gained carrier Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.741 [INFO][3547] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0 calico-apiserver-545f75f4b- calico-apiserver 923160b0-11c2-47d7-a5f7-1797d0326d64 1097 0 2023-10-02 19:08:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545f75f4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.46 calico-apiserver-545f75f4b-lfpx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid595cff26a5 [] []}} ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.741 [INFO][3547] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.799 [INFO][3600] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.839 [INFO][3600] ipam_plugin.go 269: Auto 
assigning IP ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.46", "pod":"calico-apiserver-545f75f4b-lfpx8", "timestamp":"2023-10-02 19:08:59.799150331 +0000 UTC"}, Hostname:"10.0.0.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:09:00.651901 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:00.651901 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.839 [INFO][3600] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.46' Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.841 [INFO][3600] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.851 [INFO][3600] ipam.go 372: Looking up existing affinities for host host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.854 [INFO][3600] ipam.go 489: Trying affinity for 192.168.106.128/26 host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.856 [INFO][3600] ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.858 [INFO][3600] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.858 [INFO][3600] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.861 [INFO][3600] ipam.go 1682: Creating new handle: k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50 Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:08:59.864 [INFO][3600] ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:09:00.188 [INFO][3600] ipam.go 1216: Successfully claimed IPs: [192.168.106.135/26] block=192.168.106.128/26 handle="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:09:00.188 [INFO][3600] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.135/26] handle="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" host="10.0.0.46" Oct 2 19:09:00.651901 env[1113]: time="2023-10-02T19:09:00Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:09:00.651901 env[1113]: 2023-10-02 19:09:00.188 [INFO][3600] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.106.135/26] IPv6=[] ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.190 [INFO][3547] k8s.go 383: Populated endpoint ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0", GenerateName:"calico-apiserver-545f75f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"923160b0-11c2-47d7-a5f7-1797d0326d64", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545f75f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"", Pod:"calico-apiserver-545f75f4b-lfpx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid595cff26a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.190 [INFO][3547] k8s.go 384: Calico CNI using IPs: [192.168.106.135/32] ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.190 [INFO][3547] dataplane_linux.go 68: Setting the host side veth name to calid595cff26a5 ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.237 [INFO][3547] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.246 [INFO][3547] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" 
Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0", GenerateName:"calico-apiserver-545f75f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"923160b0-11c2-47d7-a5f7-1797d0326d64", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545f75f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50", Pod:"calico-apiserver-545f75f4b-lfpx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid595cff26a5", MAC:"7e:77:7e:12:c0:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:00.652618 env[1113]: 2023-10-02 19:09:00.650 [INFO][3547] k8s.go 489: Wrote updated endpoint to datastore ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Namespace="calico-apiserver" Pod="calico-apiserver-545f75f4b-lfpx8" WorkloadEndpoint="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:00.664000 audit[3718]: NETFILTER_CFG table=filter:81 family=2 entries=60 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:00.664000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=28536 a0=3 a1=7ffc88d147b0 a2=0 a3=7ffc88d1479c items=0 ppid=2534 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:00.664000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.839 [WARNING][3592] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"486964bf-aef1-40b3-8363-5586f9f415ec", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f", Pod:"nginx-deployment-6d5f899847-54ds6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali975debe7355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.839 [INFO][3592] k8s.go 576: Cleaning up netns ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.839 [INFO][3592] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" iface="eth0" netns="" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.839 [INFO][3592] k8s.go 583: Releasing IP address(es) ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.839 [INFO][3592] utils.go 196: Calico CNI releasing IP address ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:08:59.872 [INFO][3629] ipam_plugin.go 416: Releasing address using handleID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:00.670367 env[1113]: time="2023-10-02T19:08:59Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:00.670367 env[1113]: time="2023-10-02T19:09:00Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:09:00.650 [WARNING][3629] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:09:00.650 [INFO][3629] ipam_plugin.go 444: Releasing address using workloadID ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" HandleID="k8s-pod-network.610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:00.670367 env[1113]: time="2023-10-02T19:09:00Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:00.670367 env[1113]: 2023-10-02 19:09:00.669 [INFO][3592] k8s.go 589: Teardown processing complete. ContainerID="610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb" Oct 2 19:09:00.670367 env[1113]: time="2023-10-02T19:09:00.670345643Z" level=info msg="TearDown network for sandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" successfully" Oct 2 19:09:00.912496 kubelet[1417]: E1002 19:09:00.912338 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:01.043699 env[1113]: time="2023-10-02T19:09:01.043612448Z" level=info msg="RemovePodSandbox \"610b667fcccc1d426069ee73321808c61f5b3c8ebe28c5e37f5c6d54ad702bbb\" returns successfully" Oct 2 19:09:01.044593 env[1113]: time="2023-10-02T19:09:01.044548499Z" level=info msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" Oct 2 19:09:01.071270 env[1113]: time="2023-10-02T19:09:01.071165828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:09:01.071270 env[1113]: time="2023-10-02T19:09:01.071225180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:09:01.071270 env[1113]: time="2023-10-02T19:09:01.071239106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:09:01.071558 env[1113]: time="2023-10-02T19:09:01.071453961Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50 pid=3748 runtime=io.containerd.runc.v2 Oct 2 19:09:01.096522 systemd[1]: Started cri-containerd-5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50.scope. 
Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit: BPF prog-id=128 op=LOAD Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=3748 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:01.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565316162303234636461663966656565646336303364653366366137 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=3748 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:01.108000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565316162303234636461663966656565646336303364653366366137 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit: BPF prog-id=129 op=LOAD Oct 2 19:09:01.108000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000024830 items=0 ppid=3748 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:01.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565316162303234636461663966656565646336303364653366366137 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: 
denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.108000 audit: BPF prog-id=130 op=LOAD Oct 2 19:09:01.108000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000024878 items=0 ppid=3748 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:01.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565316162303234636461663966656565646336303364653366366137 Oct 2 19:09:01.109000 audit: BPF prog-id=130 op=UNLOAD Oct 2 19:09:01.109000 audit: BPF prog-id=129 op=UNLOAD Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC 
avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { perfmon } for pid=3759 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit[3759]: AVC avc: denied { bpf } for pid=3759 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:01.109000 audit: BPF prog-id=131 op=LOAD Oct 2 19:09:01.109000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c000024c88 items=0 ppid=3748 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:01.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565316162303234636461663966656565646336303364653366366137 Oct 2 19:09:01.111147 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:09:01.141993 env[1113]: time="2023-10-02T19:09:01.141921927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545f75f4b-lfpx8,Uid:923160b0-11c2-47d7-a5f7-1797d0326d64,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\"" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.089 [WARNING][3735] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-csi--node--driver--2ckzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20101097-40e7-4d0a-a992-23f4379dc0f4", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed", Pod:"csi-node-driver-2ckzv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali89c9056dde6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.089 [INFO][3735] k8s.go 576: Cleaning up netns ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.089 [INFO][3735] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" iface="eth0" netns="" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.090 [INFO][3735] k8s.go 583: Releasing IP address(es) ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.090 [INFO][3735] utils.go 196: Calico CNI releasing IP address ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.133 [INFO][3769] ipam_plugin.go 416: Releasing address using handleID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.147409 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.147409 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.141 [WARNING][3769] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.141 [INFO][3769] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.147409 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.147409 env[1113]: 2023-10-02 19:09:01.146 [INFO][3735] k8s.go 589: Teardown processing complete. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.147872 env[1113]: time="2023-10-02T19:09:01.147434607Z" level=info msg="TearDown network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" successfully" Oct 2 19:09:01.147872 env[1113]: time="2023-10-02T19:09:01.147464042Z" level=info msg="StopPodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" returns successfully" Oct 2 19:09:01.147953 env[1113]: time="2023-10-02T19:09:01.147922835Z" level=info msg="RemovePodSandbox for \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" Oct 2 19:09:01.147997 env[1113]: time="2023-10-02T19:09:01.147952932Z" level=info msg="Forcibly stopping sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\"" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.188 [WARNING][3806] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-csi--node--driver--2ckzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20101097-40e7-4d0a-a992-23f4379dc0f4", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"8618ad5f2dcf0284643713c9851d5502e6bd68e4fedd6211de280507755ba1ed", Pod:"csi-node-driver-2ckzv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali89c9056dde6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.188 [INFO][3806] k8s.go 576: Cleaning up netns ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.188 [INFO][3806] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" iface="eth0" netns="" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.188 [INFO][3806] k8s.go 583: Releasing IP address(es) ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.188 [INFO][3806] utils.go 196: Calico CNI releasing IP address ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.212 [INFO][3814] ipam_plugin.go 416: Releasing address using handleID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.225118 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.225118 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.220 [WARNING][3814] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.220 [INFO][3814] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" HandleID="k8s-pod-network.0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Workload="10.0.0.46-k8s-csi--node--driver--2ckzv-eth0" Oct 2 19:09:01.225118 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.225118 env[1113]: 2023-10-02 19:09:01.223 [INFO][3806] k8s.go 589: Teardown processing complete. ContainerID="0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d" Oct 2 19:09:01.225118 env[1113]: time="2023-10-02T19:09:01.225068684Z" level=info msg="TearDown network for sandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" successfully" Oct 2 19:09:01.231919 env[1113]: time="2023-10-02T19:09:01.231859207Z" level=info msg="RemovePodSandbox \"0c94e7b11b1b1c4b30d7c834c47ca606c8c02832d3646ef67eb145ed1385338d\" returns successfully" Oct 2 19:09:01.232529 env[1113]: time="2023-10-02T19:09:01.232500153Z" level=info msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.272 [WARNING][3835] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"c37eda03-464c-4d96-9ada-29c3d253b3a0", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24", Pod:"calico-kube-controllers-74b9887bb6-bt4ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidad1d0801c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.272 [INFO][3835] k8s.go 576: Cleaning up netns ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.272 
[INFO][3835] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" iface="eth0" netns="" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.272 [INFO][3835] k8s.go 583: Releasing IP address(es) ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.272 [INFO][3835] utils.go 196: Calico CNI releasing IP address ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.298 [INFO][3842] ipam_plugin.go 416: Releasing address using handleID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.311390 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.311390 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.306 [WARNING][3842] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.306 [INFO][3842] ipam_plugin.go 444: Releasing address using workloadID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.311390 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.311390 env[1113]: 2023-10-02 19:09:01.309 [INFO][3835] k8s.go 589: Teardown processing complete. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.312098 env[1113]: time="2023-10-02T19:09:01.311978448Z" level=info msg="TearDown network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" successfully" Oct 2 19:09:01.312098 env[1113]: time="2023-10-02T19:09:01.312057908Z" level=info msg="StopPodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" returns successfully" Oct 2 19:09:01.313295 env[1113]: time="2023-10-02T19:09:01.312865447Z" level=info msg="RemovePodSandbox for \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" Oct 2 19:09:01.313295 env[1113]: time="2023-10-02T19:09:01.312901645Z" level=info msg="Forcibly stopping sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\"" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.347 [WARNING][3866] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"c37eda03-464c-4d96-9ada-29c3d253b3a0", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"12a8d741fc3cf02b5e4a36d2a652d67280950c73db6a708b38b835f58f207a24", Pod:"calico-kube-controllers-74b9887bb6-bt4ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidad1d0801c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.348 [INFO][3866] k8s.go 576: Cleaning up netns ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.348 [INFO][3866] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" iface="eth0" netns="" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.348 [INFO][3866] k8s.go 583: Releasing IP address(es) ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.348 [INFO][3866] utils.go 196: Calico CNI releasing IP address ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.374 [INFO][3873] ipam_plugin.go 416: Releasing address using handleID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.389896 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.389896 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.383 [WARNING][3873] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.384 [INFO][3873] ipam_plugin.go 444: Releasing address using workloadID ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" HandleID="k8s-pod-network.b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Workload="10.0.0.46-k8s-calico--kube--controllers--74b9887bb6--bt4ql-eth0" Oct 2 19:09:01.389896 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.389896 env[1113]: 2023-10-02 19:09:01.388 [INFO][3866] k8s.go 589: Teardown processing complete. ContainerID="b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a" Oct 2 19:09:01.390359 env[1113]: time="2023-10-02T19:09:01.389957704Z" level=info msg="TearDown network for sandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" successfully" Oct 2 19:09:01.393593 env[1113]: time="2023-10-02T19:09:01.393543991Z" level=info msg="RemovePodSandbox \"b9e02a1ac57b4073ee8aa1b54d58484be3d0b00c07bdab87e00fd4d22dbb884a\" returns successfully" Oct 2 19:09:01.394178 env[1113]: time="2023-10-02T19:09:01.394148207Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.437 [WARNING][3897] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"059f120d-41c2-40ee-916b-51ed03391c22", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591", Pod:"coredns-5dd5756b68-9jw66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali627fa8ea144", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.438 [INFO][3897] k8s.go 576: Cleaning up netns ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.438 [INFO][3897] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" iface="eth0" netns="" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.438 [INFO][3897] k8s.go 583: Releasing IP address(es) ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.438 [INFO][3897] utils.go 196: Calico CNI releasing IP address ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.455 [INFO][3905] ipam_plugin.go 416: Releasing address using handleID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.467726 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.467726 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.463 [WARNING][3905] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.463 [INFO][3905] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.467726 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.467726 env[1113]: 2023-10-02 19:09:01.466 [INFO][3897] k8s.go 589: Teardown processing complete. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.468274 env[1113]: time="2023-10-02T19:09:01.467776308Z" level=info msg="TearDown network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" successfully" Oct 2 19:09:01.468274 env[1113]: time="2023-10-02T19:09:01.467821363Z" level=info msg="StopPodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" returns successfully" Oct 2 19:09:01.468515 env[1113]: time="2023-10-02T19:09:01.468470674Z" level=info msg="RemovePodSandbox for \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:09:01.468567 env[1113]: time="2023-10-02T19:09:01.468516460Z" level=info msg="Forcibly stopping sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\"" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.507 [WARNING][3928] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"059f120d-41c2-40ee-916b-51ed03391c22", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.46", ContainerID:"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591", Pod:"coredns-5dd5756b68-9jw66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali627fa8ea144", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.507 [INFO][3928] k8s.go 576: Cleaning up netns ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.507 [INFO][3928] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" iface="eth0" netns="" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.507 [INFO][3928] k8s.go 583: Releasing IP address(es) ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.508 [INFO][3928] utils.go 196: Calico CNI releasing IP address ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.533 [INFO][3935] ipam_plugin.go 416: Releasing address using handleID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.546228 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:01.546228 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.541 [WARNING][3935] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.541 [INFO][3935] ipam_plugin.go 444: Releasing address using workloadID ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" HandleID="k8s-pod-network.0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Workload="10.0.0.46-k8s-coredns--5dd5756b68--9jw66-eth0" Oct 2 19:09:01.546228 env[1113]: time="2023-10-02T19:09:01Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:01.546228 env[1113]: 2023-10-02 19:09:01.545 [INFO][3928] k8s.go 589: Teardown processing complete. ContainerID="0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330" Oct 2 19:09:01.548376 env[1113]: time="2023-10-02T19:09:01.546260745Z" level=info msg="TearDown network for sandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" successfully" Oct 2 19:09:01.551044 env[1113]: time="2023-10-02T19:09:01.551000851Z" level=info msg="RemovePodSandbox \"0542d4817b1beeda04466a31c79350b7574713034b15a1bdf9b9436534b68330\" returns successfully" Oct 2 19:09:01.707081 systemd-networkd[1020]: calibf780e7f14b: Gained IPv6LL Oct 2 19:09:01.913386 kubelet[1417]: E1002 19:09:01.913321 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:02.088834 systemd-networkd[1020]: calid595cff26a5: Gained IPv6LL Oct 2 19:09:02.309792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845776195.mount: Deactivated successfully. Oct 2 19:09:02.914187 kubelet[1417]: E1002 19:09:02.914116 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:03.566329 env[1113]: time="2023-10-02T19:09:03.566259142Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.568520 env[1113]: time="2023-10-02T19:09:03.568493474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:22c2ef579d5668dbfa645a84c3a2e988885c114561e9a560a97b2d0ea6d6c988,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.570608 env[1113]: time="2023-10-02T19:09:03.570571181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.572392 env[1113]: time="2023-10-02T19:09:03.572356107Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:637f6b877b0a51c456b44ec74046864b5131a87cb1c4536f11170201073027cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.573174 env[1113]: time="2023-10-02T19:09:03.573134109Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22c2ef579d5668dbfa645a84c3a2e988885c114561e9a560a97b2d0ea6d6c988\"" Oct 2 19:09:03.573964 env[1113]: time="2023-10-02T19:09:03.573936439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Oct 2 19:09:03.575104 env[1113]: time="2023-10-02T19:09:03.575073346Z" level=info msg="CreateContainer within sandbox 
\"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Oct 2 19:09:03.587451 env[1113]: time="2023-10-02T19:09:03.587402279Z" level=info msg="CreateContainer within sandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\"" Oct 2 19:09:03.588049 env[1113]: time="2023-10-02T19:09:03.588012538Z" level=info msg="StartContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\"" Oct 2 19:09:03.607259 systemd[1]: Started cri-containerd-4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f.scope. Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit: BPF prog-id=132 op=LOAD Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=3124 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.618000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306537323736353962643664303237373537626231363338346662 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=3124 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306537323736353962643664303237373537626231363338346662 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.618000 audit: BPF prog-id=133 op=LOAD Oct 2 19:09:03.618000 audit[3953]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c0000253d0 items=0 ppid=3124 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306537323736353962643664303237373537626231363338346662 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit: BPF prog-id=134 op=LOAD Oct 2 19:09:03.619000 audit[3953]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000025418 items=0 ppid=3124 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306537323736353962643664303237373537626231363338346662 Oct 2 19:09:03.619000 audit: BPF prog-id=134 op=UNLOAD Oct 2 19:09:03.619000 audit: BPF prog-id=133 op=UNLOAD Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: 
AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { perfmon } for pid=3953 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit[3953]: AVC avc: denied { bpf } for pid=3953 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.619000 audit: BPF prog-id=135 op=LOAD Oct 2 19:09:03.619000 audit[3953]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000025828 items=0 ppid=3124 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306537323736353962643664303237373537626231363338346662 Oct 2 19:09:03.637367 env[1113]: time="2023-10-02T19:09:03.637307671Z" level=info msg="StartContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" returns successfully" Oct 2 19:09:03.742308 env[1113]: time="2023-10-02T19:09:03.742241088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.744278 env[1113]: time="2023-10-02T19:09:03.744230659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.745852 env[1113]: time="2023-10-02T19:09:03.745805751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.747381 env[1113]: time="2023-10-02T19:09:03.747351017Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:09:03.747924 env[1113]: time="2023-10-02T19:09:03.747891934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Oct 2 19:09:03.748581 env[1113]: time="2023-10-02T19:09:03.748544922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:09:03.749783 env[1113]: time="2023-10-02T19:09:03.749729088Z" level=info msg="CreateContainer within sandbox \"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 2 19:09:03.764619 env[1113]: time="2023-10-02T19:09:03.764543125Z" level=info msg="CreateContainer within sandbox \"4b70993df283387ff8a54ca67d1a5ff245a0984c52765195cec22800c8cee591\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b26c583f3e5afc9d95eb3f8f1634b14f609c298e987e0f8aa562a44fd7a0d16\"" Oct 2 19:09:03.765305 env[1113]: time="2023-10-02T19:09:03.765256215Z" level=info msg="StartContainer for \"0b26c583f3e5afc9d95eb3f8f1634b14f609c298e987e0f8aa562a44fd7a0d16\"" Oct 2 19:09:03.781229 systemd[1]: Started cri-containerd-0b26c583f3e5afc9d95eb3f8f1634b14f609c298e987e0f8aa562a44fd7a0d16.scope. Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:09:03.792000 audit: BPF prog-id=136 op=LOAD Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3334 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323663353833663365356166633964393565623366386631363334 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=3334 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323663353833663365356166633964393565623366386631363334 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit: BPF prog-id=137 op=LOAD Oct 2 19:09:03.793000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000324bd0 items=0 ppid=3334 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323663353833663365356166633964393565623366386631363334 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit: BPF prog-id=138 op=LOAD Oct 2 19:09:03.793000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000324c18 items=0 ppid=3334 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.793000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323663353833663365356166633964393565623366386631363334 Oct 2 19:09:03.793000 audit: BPF prog-id=138 op=UNLOAD Oct 2 19:09:03.793000 audit: BPF prog-id=137 op=UNLOAD Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { perfmon } for pid=4010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit[4010]: AVC avc: denied { bpf } for pid=4010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:09:03.793000 audit: BPF prog-id=139 op=LOAD Oct 2 19:09:03.793000 audit[4010]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000325028 items=0 ppid=3334 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:03.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323663353833663365356166633964393565623366386631363334 Oct 2 19:09:03.808864 env[1113]: time="2023-10-02T19:09:03.808761944Z" level=info msg="StartContainer for \"0b26c583f3e5afc9d95eb3f8f1634b14f609c298e987e0f8aa562a44fd7a0d16\" returns successfully" Oct 2 19:09:03.914819 kubelet[1417]: E1002 19:09:03.914709 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:09:04.010317 env[1113]: time="2023-10-02T19:09:04.010203070Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:04.011354 env[1113]: time="2023-10-02T19:09:04.011322464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:04.011680 kubelet[1417]: E1002 19:09:04.011649 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:04.011784 kubelet[1417]: E1002 19:09:04.011763 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:04.012044 kubelet[1417]: E1002 19:09:04.012011 1417 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image 
"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:04.012163 kubelet[1417]: E1002 19:09:04.012119 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:04.012339 env[1113]: time="2023-10-02T19:09:04.012292218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.25.0\"" Oct 2 19:09:04.253038 env[1113]: time="2023-10-02T19:09:04.252879581Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:04.293208 env[1113]: time="2023-10-02T19:09:04.293126198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:04.293506 kubelet[1417]: E1002 19:09:04.293477 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/apiserver:v3.25.0" Oct 2 19:09:04.293569 kubelet[1417]: E1002 19:09:04.293526 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/apiserver:v3.25.0" Oct 2 19:09:04.293954 kubelet[1417]: E1002 19:09:04.293678 1417 kuberuntime_manager.go:1209] container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.25.0,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-q62c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/version,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:90,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/code/filecheck],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545f75f4b-g2fhn_calico-apiserver(7b5e0374-3867-4271-8e1f-18cd2d2377f8): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/apiserver:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:04.293954 kubelet[1417]: E1002 19:09:04.293753 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-apiserver/calico-apiserver-545f75f4b-g2fhn" podUID="7b5e0374-3867-4271-8e1f-18cd2d2377f8" Oct 2 19:09:04.294159 env[1113]: time="2023-10-02T19:09:04.294005621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.25.0\"" Oct 2 19:09:04.302263 kubelet[1417]: E1002 19:09:04.302239 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:09:04.302694 kubelet[1417]: E1002 19:09:04.302668 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\"\"" pod="calico-apiserver/calico-apiserver-545f75f4b-g2fhn" podUID="7b5e0374-3867-4271-8e1f-18cd2d2377f8" Oct 2 19:09:04.303098 
kubelet[1417]: E1002 19:09:04.303071 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:04.428841 kubelet[1417]: I1002 19:09:04.428793 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-54ds6" podStartSLOduration=15.42322348 podCreationTimestamp="2023-10-02 19:08:33 +0000 UTC" firstStartedPulling="2023-10-02 19:08:47.568217284 +0000 UTC m=+49.070255227" lastFinishedPulling="2023-10-02 19:09:03.573663295 +0000 UTC m=+65.075701248" observedRunningTime="2023-10-02 19:09:04.427220296 +0000 UTC m=+65.929258259" watchObservedRunningTime="2023-10-02 19:09:04.428669501 +0000 UTC m=+65.930707444" Oct 2 19:09:04.462000 audit[4043]: NETFILTER_CFG table=filter:82 family=2 entries=10 op=nft_register_rule pid=4043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:04.462000 audit[4043]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffce9949cb0 a2=0 a3=7ffce9949c9c items=0 ppid=1618 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:04.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:04.467586 kubelet[1417]: I1002 19:09:04.467550 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9jw66" podStartSLOduration=78.923415544 podCreationTimestamp="2023-10-02 19:07:36 +0000 UTC" firstStartedPulling="2023-10-02 19:08:54.20414075 +0000 UTC m=+55.706178693" lastFinishedPulling="2023-10-02 19:09:03.748233466 +0000 UTC m=+65.250271409" observedRunningTime="2023-10-02 19:09:04.466966201 +0000 UTC m=+65.969004175" watchObservedRunningTime="2023-10-02 19:09:04.46750826 +0000 UTC m=+65.969546203" Oct 2 19:09:04.463000 audit[4043]: NETFILTER_CFG table=nat:83 family=2 entries=20 op=nft_register_rule pid=4043 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:04.463000 audit[4043]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffce9949cb0 a2=0 a3=7ffce9949c9c items=0 ppid=1618 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:04.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:04.478000 audit[4045]: NETFILTER_CFG table=filter:84 family=2 entries=10 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:04.478000 audit[4045]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffdaf9bbc60 a2=0 a3=7ffdaf9bbc4c items=0 ppid=1618 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:04.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:04.480000 audit[4045]: NETFILTER_CFG table=nat:85 family=2 entries=44 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:04.480000 audit[4045]: SYSCALL arch=c000003e syscall=46 success=yes exit=13788 a0=3 a1=7ffdaf9bbc60 a2=0 a3=7ffdaf9bbc4c items=0 ppid=1618 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:04.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:04.627187 env[1113]: time="2023-10-02T19:09:04.627105476Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:04.628537 env[1113]: time="2023-10-02T19:09:04.628488536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:04.628862 kubelet[1417]: E1002 19:09:04.628819 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/apiserver:v3.25.0" Oct 2 19:09:04.628966 kubelet[1417]: E1002 19:09:04.628879 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/apiserver:v3.25.0" Oct 2 19:09:04.629056 kubelet[1417]: E1002 19:09:04.629013 1417 kuberuntime_manager.go:1209] container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.25.0,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g85nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/version,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:90,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/code/filecheck],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545f75f4b-lfpx8_calico-apiserver(923160b0-11c2-47d7-a5f7-1797d0326d64): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/apiserver:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:04.629197 kubelet[1417]: E1002 19:09:04.629079 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-apiserver/calico-apiserver-545f75f4b-lfpx8" podUID="923160b0-11c2-47d7-a5f7-1797d0326d64" Oct 2 19:09:04.916210 kubelet[1417]: E1002 19:09:04.916023 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:05.307478 kubelet[1417]: E1002 19:09:05.307347 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:09:05.307855 kubelet[1417]: E1002 19:09:05.307835 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.25.0\\\"\"" pod="calico-apiserver/calico-apiserver-545f75f4b-lfpx8" podUID="923160b0-11c2-47d7-a5f7-1797d0326d64" Oct 2 19:09:05.917215 kubelet[1417]: E1002 19:09:05.917149 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:06.016000 audit[4049]: NETFILTER_CFG table=filter:86 family=2 entries=10 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:06.025998 kernel: kauditd_printk_skb: 236 callbacks suppressed Oct 2 19:09:06.026151 kernel: audit: type=1325 audit(1696273746.016:940): table=filter:86 family=2 entries=10 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:06.016000 audit[4049]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffcb98649b0 a2=0 a3=7ffcb986499c items=0 ppid=1618 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:06.030536 kernel: audit: type=1300 audit(1696273746.016:940): arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffcb98649b0 a2=0 a3=7ffcb986499c items=0 ppid=1618 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:06.030569 kernel: audit: type=1327 audit(1696273746.016:940): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:06.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:06.017000 audit[4049]: NETFILTER_CFG table=nat:87 family=2 entries=20 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:06.017000 audit[4049]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffcb98649b0 a2=0 a3=7ffcb986499c items=0 ppid=1618 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:06.038514 kernel: audit: type=1325 audit(1696273746.017:941): table=nat:87 family=2 entries=20 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:06.038584 kernel: audit: type=1300 audit(1696273746.017:941): arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffcb98649b0 a2=0 a3=7ffcb986499c items=0 ppid=1618 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:06.038604 kernel: audit: type=1327 audit(1696273746.017:941): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:06.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:06.309075 kubelet[1417]: E1002 19:09:06.308954 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:09:06.918149 kubelet[1417]: E1002 19:09:06.918071 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:07.044000 audit[4051]: NETFILTER_CFG table=filter:88 family=2 entries=10 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:07.044000 audit[4051]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffd6c389e90 a2=0 a3=7ffd6c389e7c items=0 ppid=1618 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:07.050627 kernel: audit: type=1325 audit(1696273747.044:942): table=filter:88 family=2 entries=10 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:07.050711 kernel: audit: type=1300 audit(1696273747.044:942): 
arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffd6c389e90 a2=0 a3=7ffd6c389e7c items=0 ppid=1618 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:07.050757 kernel: audit: type=1327 audit(1696273747.044:942): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:07.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:07.052000 audit[4051]: NETFILTER_CFG table=nat:89 family=2 entries=56 op=nft_register_chain pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:07.052000 audit[4051]: SYSCALL arch=c000003e syscall=46 success=yes exit=19452 a0=3 a1=7ffd6c389e90 a2=0 a3=7ffd6c389e7c items=0 ppid=1618 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:07.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:07.060763 kernel: audit: type=1325 audit(1696273747.052:943): table=nat:89 family=2 entries=56 op=nft_register_chain pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:07.311174 kubelet[1417]: E1002 19:09:07.311047 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:09:07.918314 kubelet[1417]: E1002 19:09:07.918245 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:08.919442 kubelet[1417]: E1002 19:09:08.919374 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:09.194926 kubelet[1417]: I1002 19:09:09.194804 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:09.194926 kubelet[1417]: I1002 19:09:09.194858 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:09.197201 kubelet[1417]: I1002 19:09:09.197161 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:09.212409 kubelet[1417]: I1002 19:09:09.212356 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:09.212597 kubelet[1417]: I1002 19:09:09.212522 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-545f75f4b-g2fhn","calico-apiserver/calico-apiserver-545f75f4b-lfpx8","calico-system/csi-node-driver-2ckzv","default/nginx-deployment-6d5f899847-54ds6","tigera-operator/tigera-operator-8547bd6cc6-d8wl8","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:09.213132 env[1113]: time="2023-10-02T19:09:09.213077764Z" level=info msg="StopPodSandbox for \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\"" Oct 2 19:09:09.215279 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a-shm.mount: Deactivated successfully. Oct 2 19:09:09.227564 systemd[1]: run-containerd-runc-k8s.io-7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3-runc.8l9MHq.mount: Deactivated successfully. Oct 2 19:09:09.228297 systemd[1]: cri-containerd-72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a.scope: Deactivated successfully. Oct 2 19:09:09.227000 audit: BPF prog-id=124 op=UNLOAD Oct 2 19:09:09.233000 audit: BPF prog-id=127 op=UNLOAD Oct 2 19:09:09.253056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a-rootfs.mount: Deactivated successfully. Oct 2 19:09:09.421632 env[1113]: time="2023-10-02T19:09:09.421570907Z" level=info msg="shim disconnected" id=72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a Oct 2 19:09:09.421632 env[1113]: time="2023-10-02T19:09:09.421632883Z" level=warning msg="cleaning up after shim disconnected" id=72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a namespace=k8s.io Oct 2 19:09:09.421928 env[1113]: time="2023-10-02T19:09:09.421650046Z" level=info msg="cleaning up dead shim" Oct 2 19:09:09.428579 env[1113]: time="2023-10-02T19:09:09.428535301Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" Oct 2 19:09:09.470298 systemd-networkd[1020]: calibf780e7f14b: Link DOWN Oct 2 19:09:09.470305 systemd-networkd[1020]: calibf780e7f14b: Lost carrier Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.468 [INFO][4150] k8s.go 576: Cleaning up netns ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.469 [INFO][4150] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" iface="eth0" netns="/var/run/netns/cni-945e6f08-cf57-8e60-6dfa-17626842b359" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.469 [INFO][4150] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" iface="eth0" netns="/var/run/netns/cni-945e6f08-cf57-8e60-6dfa-17626842b359" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.491 [INFO][4150] dataplane_linux.go 569: Deleted device in netns. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" after=21.864939ms iface="eth0" netns="/var/run/netns/cni-945e6f08-cf57-8e60-6dfa-17626842b359" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.491 [INFO][4150] k8s.go 583: Releasing IP address(es) ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.491 [INFO][4150] utils.go 196: Calico CNI releasing IP address ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.510 [INFO][4158] ipam_plugin.go 416: Releasing address using handleID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:09.554532 env[1113]: time="2023-10-02T19:09:09Z" level=info msg="About to acquire host-wide IPAM lock." 
source="ipam_plugin.go:357" Oct 2 19:09:09.554532 env[1113]: time="2023-10-02T19:09:09Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.550 [INFO][4158] ipam_plugin.go 435: Released address using handleID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.550 [INFO][4158] ipam_plugin.go 444: Releasing address using workloadID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:09.554532 env[1113]: time="2023-10-02T19:09:09Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:09.554532 env[1113]: 2023-10-02 19:09:09.553 [INFO][4150] k8s.go 589: Teardown processing complete. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:09.555151 env[1113]: time="2023-10-02T19:09:09.554868654Z" level=info msg="TearDown network for sandbox \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" successfully" Oct 2 19:09:09.555151 env[1113]: time="2023-10-02T19:09:09.554910362Z" level=info msg="StopPodSandbox for \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" returns successfully" Oct 2 19:09:09.559371 kubelet[1417]: I1002 19:09:09.559333 1417 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-545f75f4b-g2fhn" Oct 2 19:09:09.559371 kubelet[1417]: I1002 19:09:09.559367 1417 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-545f75f4b-g2fhn"] Oct 2 19:09:09.578814 kubelet[1417]: I1002 19:09:09.578504 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-xg9lh" nodeCondition=["DiskPressure"] Oct 2 19:09:09.580000 audit[4171]: NETFILTER_CFG table=filter:90 family=2 entries=10 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:09.580000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffc69f7d620 a2=0 a3=7ffc69f7d60c items=0 ppid=1618 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:09.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:09.581000 audit[4171]: NETFILTER_CFG table=nat:91 family=2 entries=20 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:09.581000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffc69f7d620 a2=0 a3=7ffc69f7d60c items=0 ppid=1618 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:09.581000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:09.589000 audit[4172]: NETFILTER_CFG table=filter:92 family=2 entries=48 op=nft_register_rule pid=4172 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:09.589000 audit[4172]: SYSCALL arch=c000003e syscall=46 success=yes exit=7788 a0=3 a1=7ffcadc0e400 a2=0 a3=7ffcadc0e3ec items=0 ppid=2534 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:09.589000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:09.589000 audit[4172]: NETFILTER_CFG table=filter:93 family=2 entries=2 op=nft_unregister_chain pid=4172 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:09.589000 audit[4172]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcadc0e400 a2=0 a3=55f5ca702000 items=0 ppid=2534 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:09.589000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:09.621426 kubelet[1417]: I1002 19:09:09.621364 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-q8qxt" nodeCondition=["DiskPressure"] Oct 2 19:09:09.671084 kubelet[1417]: I1002 19:09:09.671034 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-vzpfp" nodeCondition=["DiskPressure"] Oct 2 19:09:09.693935 kubelet[1417]: I1002 19:09:09.693852 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-545gx" nodeCondition=["DiskPressure"] Oct 2 19:09:09.711990 kubelet[1417]: I1002 19:09:09.711949 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62c5\" (UniqueName: \"kubernetes.io/projected/7b5e0374-3867-4271-8e1f-18cd2d2377f8-kube-api-access-q62c5\") pod \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\" (UID: \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\") " Oct 2 19:09:09.712299 kubelet[1417]: I1002 19:09:09.712283 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs\") pod \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\" (UID: \"7b5e0374-3867-4271-8e1f-18cd2d2377f8\") " Oct 2 19:09:09.712875 kubelet[1417]: I1002 19:09:09.712758 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-kmm5q" nodeCondition=["DiskPressure"] Oct 2 19:09:09.715545 kubelet[1417]: I1002 19:09:09.715497 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5e0374-3867-4271-8e1f-18cd2d2377f8-kube-api-access-q62c5" (OuterVolumeSpecName: "kube-api-access-q62c5") pod "7b5e0374-3867-4271-8e1f-18cd2d2377f8" (UID: "7b5e0374-3867-4271-8e1f-18cd2d2377f8"). InnerVolumeSpecName "kube-api-access-q62c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:09:09.715912 kubelet[1417]: I1002 19:09:09.715769 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7b5e0374-3867-4271-8e1f-18cd2d2377f8" (UID: "7b5e0374-3867-4271-8e1f-18cd2d2377f8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:09:09.734059 kubelet[1417]: I1002 19:09:09.733908 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-rnkrj" nodeCondition=["DiskPressure"] Oct 2 19:09:09.757370 kubelet[1417]: I1002 19:09:09.757326 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-lkhdn" nodeCondition=["DiskPressure"] Oct 2 19:09:09.813151 kubelet[1417]: I1002 19:09:09.813083 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q62c5\" (UniqueName: \"kubernetes.io/projected/7b5e0374-3867-4271-8e1f-18cd2d2377f8-kube-api-access-q62c5\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:09.813151 kubelet[1417]: I1002 19:09:09.813117 1417 reconciler_common.go:300] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b5e0374-3867-4271-8e1f-18cd2d2377f8-calico-apiserver-certs\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:09.829807 kubelet[1417]: I1002 19:09:09.829513 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-vplsl" nodeCondition=["DiskPressure"] Oct 2 19:09:09.920225 kubelet[1417]: E1002 19:09:09.920178 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:09.979053 kubelet[1417]: I1002 19:09:09.979009 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-68nqf" nodeCondition=["DiskPressure"] Oct 2 19:09:10.215580 systemd[1]: run-netns-cni\x2d945e6f08\x2dcf57\x2d8e60\x2d6dfa\x2d17626842b359.mount: Deactivated successfully. Oct 2 19:09:10.215767 systemd[1]: var-lib-kubelet-pods-7b5e0374\x2d3867\x2d4271\x2d8e1f\x2d18cd2d2377f8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Oct 2 19:09:10.215865 systemd[1]: var-lib-kubelet-pods-7b5e0374\x2d3867\x2d4271\x2d8e1f\x2d18cd2d2377f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq62c5.mount: Deactivated successfully. Oct 2 19:09:10.231365 kubelet[1417]: I1002 19:09:10.231007 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-hlf4p" nodeCondition=["DiskPressure"] Oct 2 19:09:10.320464 systemd[1]: Removed slice kubepods-besteffort-pod7b5e0374_3867_4271_8e1f_18cd2d2377f8.slice. 
Oct 2 19:09:10.380931 kubelet[1417]: I1002 19:09:10.380891 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-9682f" nodeCondition=["DiskPressure"] Oct 2 19:09:10.529939 kubelet[1417]: I1002 19:09:10.529818 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-5h5f9" nodeCondition=["DiskPressure"] Oct 2 19:09:10.559773 kubelet[1417]: I1002 19:09:10.559692 1417 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-545f75f4b-g2fhn"] Oct 2 19:09:10.573531 kubelet[1417]: I1002 19:09:10.573497 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:10.573531 kubelet[1417]: I1002 19:09:10.573541 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:10.575365 env[1113]: time="2023-10-02T19:09:10.575324972Z" level=info msg="StopPodSandbox for \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\"" Oct 2 19:09:10.905216 kubelet[1417]: I1002 19:09:10.905114 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-x7wxp" nodeCondition=["DiskPressure"] Oct 2 19:09:10.920353 kubelet[1417]: E1002 19:09:10.920301 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:10.922064 kubelet[1417]: I1002 19:09:10.922011 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-fxtpf" nodeCondition=["DiskPressure"] Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.913 [INFO][4213] k8s.go 576: Cleaning up netns ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.914 [INFO][4213] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" iface="eth0" netns="" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.914 [INFO][4213] k8s.go 583: Releasing IP address(es) ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.914 [INFO][4213] utils.go 196: Calico CNI releasing IP address ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.932 [INFO][4222] ipam_plugin.go 416: Releasing address using handleID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:10.946146 env[1113]: time="2023-10-02T19:09:10Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:10.946146 env[1113]: time="2023-10-02T19:09:10Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.941 [WARNING][4222] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.941 [INFO][4222] ipam_plugin.go 444: Releasing address using workloadID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:10.946146 env[1113]: time="2023-10-02T19:09:10Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:10.946146 env[1113]: 2023-10-02 19:09:10.944 [INFO][4213] k8s.go 589: Teardown processing complete. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:10.946650 env[1113]: time="2023-10-02T19:09:10.946179245Z" level=info msg="TearDown network for sandbox \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" successfully" Oct 2 19:09:10.946650 env[1113]: time="2023-10-02T19:09:10.946226634Z" level=info msg="StopPodSandbox for \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" returns successfully" Oct 2 19:09:10.946816 env[1113]: time="2023-10-02T19:09:10.946778911Z" level=info msg="RemovePodSandbox for \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\"" Oct 2 19:09:10.947016 env[1113]: time="2023-10-02T19:09:10.946809510Z" level=info msg="Forcibly stopping sandbox \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\"" Oct 2 19:09:10.979323 kubelet[1417]: I1002 19:09:10.979272 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-ntjh2" nodeCondition=["DiskPressure"] Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:10.993 [INFO][4244] k8s.go 576: Cleaning up netns ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:10.995 [INFO][4244] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" iface="eth0" netns="" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:10.996 [INFO][4244] k8s.go 583: Releasing IP address(es) ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:10.996 [INFO][4244] utils.go 196: Calico CNI releasing IP address ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:11.020 [INFO][4252] ipam_plugin.go 416: Releasing address using handleID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:11.032269 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:11.032269 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:11.027 [WARNING][4252] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:11.028 [INFO][4252] ipam_plugin.go 444: Releasing address using workloadID ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" HandleID="k8s-pod-network.72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--g2fhn-eth0" Oct 2 19:09:11.032269 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:11.032269 env[1113]: 2023-10-02 19:09:11.031 [INFO][4244] k8s.go 589: Teardown processing complete. ContainerID="72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a" Oct 2 19:09:11.032920 env[1113]: time="2023-10-02T19:09:11.032301475Z" level=info msg="TearDown network for sandbox \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" successfully" Oct 2 19:09:11.036475 env[1113]: time="2023-10-02T19:09:11.036433938Z" level=info msg="RemovePodSandbox \"72e985afa506709969219f309341126cb090fc5d608370b313e38e64f246df2a\" returns successfully" Oct 2 19:09:11.037167 kubelet[1417]: I1002 19:09:11.037127 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:11.053004 kubelet[1417]: I1002 19:09:11.052954 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:11.053184 kubelet[1417]: I1002 19:09:11.053070 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-545f75f4b-lfpx8","calico-system/csi-node-driver-2ckzv","default/nginx-deployment-6d5f899847-54ds6","tigera-operator/tigera-operator-8547bd6cc6-d8wl8","kube-system/coredns-5dd5756b68-8glxb","kube-system/coredns-5dd5756b68-9jw66","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:11.053608 env[1113]: time="2023-10-02T19:09:11.053551059Z" level=info msg="StopPodSandbox for \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\"" Oct 2 19:09:11.055549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50-shm.mount: Deactivated successfully. Oct 2 19:09:11.062000 audit: BPF prog-id=128 op=UNLOAD Oct 2 19:09:11.063468 systemd[1]: cri-containerd-5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50.scope: Deactivated successfully. Oct 2 19:09:11.065189 kernel: kauditd_printk_skb: 16 callbacks suppressed Oct 2 19:09:11.065259 kernel: audit: type=1334 audit(1696273751.062:950): prog-id=128 op=UNLOAD Oct 2 19:09:11.068000 audit: BPF prog-id=131 op=UNLOAD Oct 2 19:09:11.070764 kernel: audit: type=1334 audit(1696273751.068:951): prog-id=131 op=UNLOAD Oct 2 19:09:11.087075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50-rootfs.mount: Deactivated successfully. 
Oct 2 19:09:11.102394 env[1113]: time="2023-10-02T19:09:11.102325586Z" level=info msg="shim disconnected" id=5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50 Oct 2 19:09:11.102394 env[1113]: time="2023-10-02T19:09:11.102386260Z" level=warning msg="cleaning up after shim disconnected" id=5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50 namespace=k8s.io Oct 2 19:09:11.102394 env[1113]: time="2023-10-02T19:09:11.102398843Z" level=info msg="cleaning up dead shim" Oct 2 19:09:11.110230 env[1113]: time="2023-10-02T19:09:11.110176733Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4278 runtime=io.containerd.runc.v2\n" Oct 2 19:09:11.133448 kubelet[1417]: I1002 19:09:11.133392 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-dxhpg" nodeCondition=["DiskPressure"] Oct 2 19:09:11.159706 systemd-networkd[1020]: calid595cff26a5: Link DOWN Oct 2 19:09:11.159715 systemd-networkd[1020]: calid595cff26a5: Lost carrier Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.158 [INFO][4306] k8s.go 576: Cleaning up netns ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.158 [INFO][4306] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" iface="eth0" netns="/var/run/netns/cni-10642b70-9695-ef56-8322-78a8141aab83" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.158 [INFO][4306] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" iface="eth0" netns="/var/run/netns/cni-10642b70-9695-ef56-8322-78a8141aab83" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.183 [INFO][4306] dataplane_linux.go 569: Deleted device in netns. ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" after=24.398459ms iface="eth0" netns="/var/run/netns/cni-10642b70-9695-ef56-8322-78a8141aab83" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.183 [INFO][4306] k8s.go 583: Releasing IP address(es) ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.183 [INFO][4306] utils.go 196: Calico CNI releasing IP address ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.210 [INFO][4313] ipam_plugin.go 416: Releasing address using handleID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:11.279718 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:11.279718 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.270 [INFO][4313] ipam_plugin.go 435: Released address using handleID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.270 [INFO][4313] ipam_plugin.go 444: Releasing address using workloadID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:11.279718 env[1113]: time="2023-10-02T19:09:11Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:11.279718 env[1113]: 2023-10-02 19:09:11.276 [INFO][4306] k8s.go 589: Teardown processing complete. ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:11.280328 env[1113]: time="2023-10-02T19:09:11.279970838Z" level=info msg="TearDown network for sandbox \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" successfully" Oct 2 19:09:11.280328 env[1113]: time="2023-10-02T19:09:11.280019429Z" level=info msg="StopPodSandbox for \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" returns successfully" Oct 2 19:09:11.281803 systemd[1]: run-netns-cni\x2d10642b70\x2d9695\x2def56\x2d8322\x2d78a8141aab83.mount: Deactivated successfully. Oct 2 19:09:11.286782 kubelet[1417]: I1002 19:09:11.286750 1417 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-545f75f4b-lfpx8" Oct 2 19:09:11.286948 kubelet[1417]: I1002 19:09:11.286789 1417 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-545f75f4b-lfpx8"] Oct 2 19:09:11.319000 audit[4329]: NETFILTER_CFG table=filter:94 family=2 entries=10 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:11.319000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffc7ec4bb60 a2=0 a3=7ffc7ec4bb4c items=0 ppid=1618 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.327771 kernel: audit: type=1325 audit(1696273751.319:952): table=filter:94 family=2 entries=10 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:11.327860 kernel: audit: type=1300 audit(1696273751.319:952): arch=c000003e syscall=46 success=yes exit=3548 a0=3 a1=7ffc7ec4bb60 a2=0 a3=7ffc7ec4bb4c items=0 ppid=1618 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.327889 kernel: audit: type=1327 audit(1696273751.319:952): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:11.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:11.320000 audit[4329]: NETFILTER_CFG table=nat:95 family=2 
entries=20 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:11.320000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffc7ec4bb60 a2=0 a3=7ffc7ec4bb4c items=0 ppid=1618 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.337267 kernel: audit: type=1325 audit(1696273751.320:953): table=nat:95 family=2 entries=20 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:09:11.337395 kernel: audit: type=1300 audit(1696273751.320:953): arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffc7ec4bb60 a2=0 a3=7ffc7ec4bb4c items=0 ppid=1618 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.337431 kernel: audit: type=1327 audit(1696273751.320:953): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:11.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:09:11.336000 audit[4330]: NETFILTER_CFG table=filter:96 family=2 entries=60 op=nft_register_rule pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:11.336000 audit[4330]: SYSCALL arch=c000003e syscall=46 success=yes exit=8264 a0=3 a1=7ffd491f0370 a2=0 a3=7ffd491f035c items=0 ppid=2534 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.346620 kernel: audit: type=1325 audit(1696273751.336:954): table=filter:96 family=2 entries=60 op=nft_register_rule pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:11.346675 kernel: audit: type=1300 audit(1696273751.336:954): arch=c000003e syscall=46 success=yes exit=8264 a0=3 a1=7ffd491f0370 a2=0 a3=7ffd491f035c items=0 ppid=2534 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:11.340000 audit[4330]: NETFILTER_CFG table=filter:97 family=2 entries=9 op=nft_unregister_chain pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:11.340000 audit[4330]: SYSCALL arch=c000003e syscall=46 success=yes exit=1280 a0=3 a1=7ffd491f0370 a2=0 a3=5585a540b000 items=0 ppid=2534 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:11.340000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:11.380918 kubelet[1417]: I1002 19:09:11.380867 1417 eviction_manager.go:170] "Failed to admit pod to node" 
pod="calico-apiserver/calico-apiserver-545f75f4b-qbp8r" nodeCondition=["DiskPressure"] Oct 2 19:09:11.424436 kubelet[1417]: I1002 19:09:11.424274 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85nm\" (UniqueName: \"kubernetes.io/projected/923160b0-11c2-47d7-a5f7-1797d0326d64-kube-api-access-g85nm\") pod \"923160b0-11c2-47d7-a5f7-1797d0326d64\" (UID: \"923160b0-11c2-47d7-a5f7-1797d0326d64\") " Oct 2 19:09:11.424436 kubelet[1417]: I1002 19:09:11.424330 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs\") pod \"923160b0-11c2-47d7-a5f7-1797d0326d64\" (UID: \"923160b0-11c2-47d7-a5f7-1797d0326d64\") " Oct 2 19:09:11.427237 kubelet[1417]: I1002 19:09:11.427188 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "923160b0-11c2-47d7-a5f7-1797d0326d64" (UID: "923160b0-11c2-47d7-a5f7-1797d0326d64"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:09:11.427919 kubelet[1417]: I1002 19:09:11.427865 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923160b0-11c2-47d7-a5f7-1797d0326d64-kube-api-access-g85nm" (OuterVolumeSpecName: "kube-api-access-g85nm") pod "923160b0-11c2-47d7-a5f7-1797d0326d64" (UID: "923160b0-11c2-47d7-a5f7-1797d0326d64"). InnerVolumeSpecName "kube-api-access-g85nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:09:11.428764 systemd[1]: var-lib-kubelet-pods-923160b0\x2d11c2\x2d47d7\x2da5f7\x2d1797d0326d64-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Oct 2 19:09:11.428891 systemd[1]: var-lib-kubelet-pods-923160b0\x2d11c2\x2d47d7\x2da5f7\x2d1797d0326d64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg85nm.mount: Deactivated successfully. Oct 2 19:09:11.525302 kubelet[1417]: I1002 19:09:11.525248 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g85nm\" (UniqueName: \"kubernetes.io/projected/923160b0-11c2-47d7-a5f7-1797d0326d64-kube-api-access-g85nm\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:11.525302 kubelet[1417]: I1002 19:09:11.525299 1417 reconciler_common.go:300] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/923160b0-11c2-47d7-a5f7-1797d0326d64-calico-apiserver-certs\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:11.550717 kubelet[1417]: I1002 19:09:11.550410 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-7gbd8" nodeCondition=["DiskPressure"] Oct 2 19:09:11.629134 systemd[1]: Removed slice kubepods-besteffort-pod923160b0_11c2_47d7_a5f7_1797d0326d64.slice. 
Oct 2 19:09:11.632238 kubelet[1417]: I1002 19:09:11.632194 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-zwzkn" nodeCondition=["DiskPressure"] Oct 2 19:09:11.781031 kubelet[1417]: I1002 19:09:11.780905 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-7xfvt" nodeCondition=["DiskPressure"] Oct 2 19:09:11.828413 kubelet[1417]: I1002 19:09:11.828378 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-8499z" nodeCondition=["DiskPressure"] Oct 2 19:09:11.921231 kubelet[1417]: E1002 19:09:11.921170 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:12.198361 kubelet[1417]: I1002 19:09:12.198318 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-9qhmf" nodeCondition=["DiskPressure"] Oct 2 19:09:12.209461 kubelet[1417]: I1002 19:09:12.209406 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-s8s4c" nodeCondition=["DiskPressure"] Oct 2 19:09:12.282469 kubelet[1417]: I1002 19:09:12.282413 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-gfxjh" nodeCondition=["DiskPressure"] Oct 2 19:09:12.287454 kubelet[1417]: I1002 19:09:12.287386 1417 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-545f75f4b-lfpx8"] Oct 2 19:09:12.307200 kubelet[1417]: I1002 19:09:12.307156 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:12.307200 kubelet[1417]: I1002 19:09:12.307200 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:12.310060 env[1113]: time="2023-10-02T19:09:12.310014908Z" level=info msg="StopPodSandbox for \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\"" Oct 2 19:09:12.381097 kubelet[1417]: I1002 19:09:12.381042 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-2lrdl" nodeCondition=["DiskPressure"] Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.354 [INFO][4349] k8s.go 576: Cleaning up netns ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.354 [INFO][4349] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" iface="eth0" netns="" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.354 [INFO][4349] k8s.go 583: Releasing IP address(es) ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.354 [INFO][4349] utils.go 196: Calico CNI releasing IP address ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.376 [INFO][4356] ipam_plugin.go 416: Releasing address using handleID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.390885 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:12.390885 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.385 [WARNING][4356] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.385 [INFO][4356] ipam_plugin.go 444: Releasing address using workloadID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.390885 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:12.390885 env[1113]: 2023-10-02 19:09:12.389 [INFO][4349] k8s.go 589: Teardown processing complete. ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.391364 env[1113]: time="2023-10-02T19:09:12.390936993Z" level=info msg="TearDown network for sandbox \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" successfully" Oct 2 19:09:12.391364 env[1113]: time="2023-10-02T19:09:12.390983961Z" level=info msg="StopPodSandbox for \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" returns successfully" Oct 2 19:09:12.391460 env[1113]: time="2023-10-02T19:09:12.391432644Z" level=info msg="RemovePodSandbox for \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\"" Oct 2 19:09:12.391531 env[1113]: time="2023-10-02T19:09:12.391472319Z" level=info msg="Forcibly stopping sandbox \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\"" Oct 2 19:09:12.437214 kubelet[1417]: I1002 19:09:12.436941 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-545f75f4b-wfzv9" nodeCondition=["DiskPressure"] Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.436 [INFO][4379] k8s.go 576: Cleaning up netns ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.436 [INFO][4379] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" iface="eth0" netns="" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.436 [INFO][4379] k8s.go 583: Releasing IP address(es) ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.436 [INFO][4379] utils.go 196: Calico CNI releasing IP address ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.466 [INFO][4388] ipam_plugin.go 416: Releasing address using handleID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.479265 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:12.479265 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.474 [WARNING][4388] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.475 [INFO][4388] ipam_plugin.go 444: Releasing address using workloadID ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" HandleID="k8s-pod-network.5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Workload="10.0.0.46-k8s-calico--apiserver--545f75f4b--lfpx8-eth0" Oct 2 19:09:12.479265 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:12.479265 env[1113]: 2023-10-02 19:09:12.477 [INFO][4379] k8s.go 589: Teardown processing complete. 
ContainerID="5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50" Oct 2 19:09:12.479265 env[1113]: time="2023-10-02T19:09:12.479223200Z" level=info msg="TearDown network for sandbox \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" successfully" Oct 2 19:09:12.483403 env[1113]: time="2023-10-02T19:09:12.483357655Z" level=info msg="RemovePodSandbox \"5e1ab024cdaf9feeedc603de3f6a7de26d99697bdaed304d368d75684619cb50\" returns successfully" Oct 2 19:09:12.483956 kubelet[1417]: I1002 19:09:12.483919 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:12.497657 kubelet[1417]: I1002 19:09:12.497619 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:12.497836 kubelet[1417]: I1002 19:09:12.497760 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","default/nginx-deployment-6d5f899847-54ds6","tigera-operator/tigera-operator-8547bd6cc6-d8wl8","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","kube-system/coredns-5dd5756b68-8glxb","kube-system/coredns-5dd5756b68-9jw66","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:12.497836 kubelet[1417]: E1002 19:09:12.497801 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:12.498581 env[1113]: time="2023-10-02T19:09:12.498531234Z" level=info msg="StopContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" with timeout 30 (s)" Oct 2 19:09:12.499044 env[1113]: time="2023-10-02T19:09:12.498981860Z" level=info msg="Stop container \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" with signal quit" Oct 2 19:09:12.519330 systemd[1]: cri-containerd-4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f.scope: Deactivated successfully. Oct 2 19:09:12.518000 audit: BPF prog-id=132 op=UNLOAD Oct 2 19:09:12.523000 audit: BPF prog-id=135 op=UNLOAD Oct 2 19:09:12.537081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f-rootfs.mount: Deactivated successfully. 
Oct 2 19:09:12.548860 env[1113]: time="2023-10-02T19:09:12.548799867Z" level=info msg="shim disconnected" id=4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f Oct 2 19:09:12.548860 env[1113]: time="2023-10-02T19:09:12.548860872Z" level=warning msg="cleaning up after shim disconnected" id=4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f namespace=k8s.io Oct 2 19:09:12.549078 env[1113]: time="2023-10-02T19:09:12.548872514Z" level=info msg="cleaning up dead shim" Oct 2 19:09:12.555499 env[1113]: time="2023-10-02T19:09:12.555454346Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4414 runtime=io.containerd.runc.v2\n" Oct 2 19:09:12.558932 env[1113]: time="2023-10-02T19:09:12.558897412Z" level=info msg="StopContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" returns successfully" Oct 2 19:09:12.559537 env[1113]: time="2023-10-02T19:09:12.559506817Z" level=info msg="StopPodSandbox for \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\"" Oct 2 19:09:12.559601 env[1113]: time="2023-10-02T19:09:12.559573712Z" level=info msg="Container to stop \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:09:12.561042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f-shm.mount: Deactivated successfully. Oct 2 19:09:12.568095 systemd[1]: cri-containerd-07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f.scope: Deactivated successfully. Oct 2 19:09:12.567000 audit: BPF prog-id=108 op=UNLOAD Oct 2 19:09:12.573000 audit: BPF prog-id=111 op=UNLOAD Oct 2 19:09:12.587804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f-rootfs.mount: Deactivated successfully. Oct 2 19:09:12.594393 env[1113]: time="2023-10-02T19:09:12.594332065Z" level=info msg="shim disconnected" id=07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f Oct 2 19:09:12.594393 env[1113]: time="2023-10-02T19:09:12.594394603Z" level=warning msg="cleaning up after shim disconnected" id=07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f namespace=k8s.io Oct 2 19:09:12.594681 env[1113]: time="2023-10-02T19:09:12.594412356Z" level=info msg="cleaning up dead shim" Oct 2 19:09:12.601870 env[1113]: time="2023-10-02T19:09:12.601807515Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4445 runtime=io.containerd.runc.v2\n" Oct 2 19:09:12.700294 systemd-networkd[1020]: cali975debe7355: Link DOWN Oct 2 19:09:12.700304 systemd-networkd[1020]: cali975debe7355: Lost carrier Oct 2 19:09:12.921440 kubelet[1417]: E1002 19:09:12.921364 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.698 [INFO][4473] k8s.go 576: Cleaning up netns ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.698 [INFO][4473] dataplane_linux.go 524: Deleting workload's device in netns. 
ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" iface="eth0" netns="/var/run/netns/cni-3aa9f337-52eb-ca18-7cf0-5be3a48784c4" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.698 [INFO][4473] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" iface="eth0" netns="/var/run/netns/cni-3aa9f337-52eb-ca18-7cf0-5be3a48784c4" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.714 [INFO][4473] dataplane_linux.go 569: Deleted device in netns. ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" after=16.296157ms iface="eth0" netns="/var/run/netns/cni-3aa9f337-52eb-ca18-7cf0-5be3a48784c4" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.715 [INFO][4473] k8s.go 583: Releasing IP address(es) ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.715 [INFO][4473] utils.go 196: Calico CNI releasing IP address ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:12.733 [INFO][4481] ipam_plugin.go 416: Releasing address using handleID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:13.056728 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:13.056728 env[1113]: time="2023-10-02T19:09:12Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:13.050 [INFO][4481] ipam_plugin.go 435: Released address using handleID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:13.050 [INFO][4481] ipam_plugin.go 444: Releasing address using workloadID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:13.056728 env[1113]: time="2023-10-02T19:09:13Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:13.056728 env[1113]: 2023-10-02 19:09:13.055 [INFO][4473] k8s.go 589: Teardown processing complete. ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:13.057461 env[1113]: time="2023-10-02T19:09:13.057064257Z" level=info msg="TearDown network for sandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" successfully" Oct 2 19:09:13.057461 env[1113]: time="2023-10-02T19:09:13.057115845Z" level=info msg="StopPodSandbox for \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" returns successfully" Oct 2 19:09:13.058575 systemd[1]: run-netns-cni\x2d3aa9f337\x2d52eb\x2dca18\x2d7cf0\x2d5be3a48784c4.mount: Deactivated successfully. 
Oct 2 19:09:13.063069 kubelet[1417]: I1002 19:09:13.063044 1417 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="default/nginx-deployment-6d5f899847-54ds6" Oct 2 19:09:13.063069 kubelet[1417]: I1002 19:09:13.063075 1417 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["default/nginx-deployment-6d5f899847-54ds6"] Oct 2 19:09:13.093000 audit[4495]: NETFILTER_CFG table=filter:98 family=2 entries=46 op=nft_register_rule pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:13.093000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=6872 a0=3 a1=7ffd3f48e650 a2=0 a3=7ffd3f48e63c items=0 ppid=2534 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:13.093000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:13.093000 audit[4495]: NETFILTER_CFG table=filter:99 family=2 entries=6 op=nft_unregister_chain pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:09:13.093000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=848 a0=3 a1=7ffd3f48e650 a2=0 a3=55d9fb595000 items=0 ppid=2534 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:09:13.093000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:09:13.236386 kubelet[1417]: I1002 19:09:13.236208 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kdlj\" (UniqueName: \"kubernetes.io/projected/486964bf-aef1-40b3-8363-5586f9f415ec-kube-api-access-5kdlj\") pod \"486964bf-aef1-40b3-8363-5586f9f415ec\" (UID: \"486964bf-aef1-40b3-8363-5586f9f415ec\") " Oct 2 19:09:13.239543 kubelet[1417]: I1002 19:09:13.239481 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486964bf-aef1-40b3-8363-5586f9f415ec-kube-api-access-5kdlj" (OuterVolumeSpecName: "kube-api-access-5kdlj") pod "486964bf-aef1-40b3-8363-5586f9f415ec" (UID: "486964bf-aef1-40b3-8363-5586f9f415ec"). InnerVolumeSpecName "kube-api-access-5kdlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:09:13.241233 systemd[1]: var-lib-kubelet-pods-486964bf\x2daef1\x2d40b3\x2d8363\x2d5586f9f415ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kdlj.mount: Deactivated successfully. Oct 2 19:09:13.325764 kubelet[1417]: I1002 19:09:13.325710 1417 scope.go:117] "RemoveContainer" containerID="4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f" Oct 2 19:09:13.327305 env[1113]: time="2023-10-02T19:09:13.327267920Z" level=info msg="RemoveContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\"" Oct 2 19:09:13.329697 systemd[1]: Removed slice kubepods-besteffort-pod486964bf_aef1_40b3_8363_5586f9f415ec.slice. 
Oct 2 19:09:13.331477 env[1113]: time="2023-10-02T19:09:13.331420068Z" level=info msg="RemoveContainer for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" returns successfully" Oct 2 19:09:13.331828 kubelet[1417]: I1002 19:09:13.331790 1417 scope.go:117] "RemoveContainer" containerID="4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f" Oct 2 19:09:13.332289 env[1113]: time="2023-10-02T19:09:13.332163474Z" level=error msg="ContainerStatus for \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\": not found" Oct 2 19:09:13.332501 kubelet[1417]: E1002 19:09:13.332482 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\": not found" containerID="4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f" Oct 2 19:09:13.332571 kubelet[1417]: I1002 19:09:13.332530 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f"} err="failed to get container status \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d0e727659bd6d027757bb16384fb6b87d49b8c4d4fcb5e75a477fdbb1ee6e1f\": not found" Oct 2 19:09:13.339132 kubelet[1417]: I1002 19:09:13.336985 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5kdlj\" (UniqueName: \"kubernetes.io/projected/486964bf-aef1-40b3-8363-5586f9f415ec-kube-api-access-5kdlj\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:13.922219 kubelet[1417]: E1002 19:09:13.922136 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:14.065660 kubelet[1417]: I1002 19:09:14.064109 1417 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" pods=["default/nginx-deployment-6d5f899847-54ds6"] Oct 2 19:09:14.077040 kubelet[1417]: I1002 19:09:14.077000 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:14.077040 kubelet[1417]: I1002 19:09:14.077053 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:14.082480 env[1113]: time="2023-10-02T19:09:14.082425743Z" level=info msg="StopPodSandbox for \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\"" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.207 [INFO][4513] k8s.go 576: Cleaning up netns ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.207 [INFO][4513] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" iface="eth0" netns="" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.207 [INFO][4513] k8s.go 583: Releasing IP address(es) ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.207 [INFO][4513] utils.go 196: Calico CNI releasing IP address ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.230 [INFO][4523] ipam_plugin.go 416: Releasing address using handleID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.243607 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:14.243607 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.237 [WARNING][4523] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.238 [INFO][4523] ipam_plugin.go 444: Releasing address using workloadID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.243607 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:14.243607 env[1113]: 2023-10-02 19:09:14.241 [INFO][4513] k8s.go 589: Teardown processing complete. ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.244037 env[1113]: time="2023-10-02T19:09:14.243597336Z" level=info msg="TearDown network for sandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" successfully" Oct 2 19:09:14.244037 env[1113]: time="2023-10-02T19:09:14.243635317Z" level=info msg="StopPodSandbox for \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" returns successfully" Oct 2 19:09:14.244672 env[1113]: time="2023-10-02T19:09:14.244627551Z" level=info msg="RemovePodSandbox for \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\"" Oct 2 19:09:14.244672 env[1113]: time="2023-10-02T19:09:14.244659411Z" level=info msg="Forcibly stopping sandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\"" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.283 [INFO][4545] k8s.go 576: Cleaning up netns ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.283 [INFO][4545] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" iface="eth0" netns="" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.283 [INFO][4545] k8s.go 583: Releasing IP address(es) ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.283 [INFO][4545] utils.go 196: Calico CNI releasing IP address ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.300 [INFO][4553] ipam_plugin.go 416: Releasing address using handleID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.312540 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:09:14.312540 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.308 [WARNING][4553] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.308 [INFO][4553] ipam_plugin.go 444: Releasing address using workloadID ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" HandleID="k8s-pod-network.07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Workload="10.0.0.46-k8s-nginx--deployment--6d5f899847--54ds6-eth0" Oct 2 19:09:14.312540 env[1113]: time="2023-10-02T19:09:14Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:09:14.312540 env[1113]: 2023-10-02 19:09:14.311 [INFO][4545] k8s.go 589: Teardown processing complete. 
ContainerID="07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f" Oct 2 19:09:14.312942 env[1113]: time="2023-10-02T19:09:14.312587879Z" level=info msg="TearDown network for sandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" successfully" Oct 2 19:09:14.316637 env[1113]: time="2023-10-02T19:09:14.316607527Z" level=info msg="RemovePodSandbox \"07dfea44d0ed1d880586fae01d7016f5a8fb16e0d454c72123a007ed1026f58f\" returns successfully" Oct 2 19:09:14.317187 kubelet[1417]: I1002 19:09:14.317162 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:14.330939 kubelet[1417]: I1002 19:09:14.330902 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:14.331086 kubelet[1417]: I1002 19:09:14.330995 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","tigera-operator/tigera-operator-8547bd6cc6-d8wl8","kube-system/coredns-5dd5756b68-8glxb","kube-system/coredns-5dd5756b68-9jw66","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:14.331086 kubelet[1417]: E1002 19:09:14.331022 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:14.331658 env[1113]: time="2023-10-02T19:09:14.331619628Z" level=info msg="StopContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" with timeout 30 (s)" Oct 2 19:09:14.332230 env[1113]: time="2023-10-02T19:09:14.332204235Z" level=info msg="Stop container \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" with signal terminated" Oct 2 19:09:14.342028 systemd[1]: cri-containerd-7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da.scope: Deactivated successfully. Oct 2 19:09:14.342399 systemd[1]: cri-containerd-7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da.scope: Consumed 1.011s CPU time. Oct 2 19:09:14.341000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:09:14.348000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:09:14.359241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da-rootfs.mount: Deactivated successfully. 
Oct 2 19:09:14.463851 env[1113]: time="2023-10-02T19:09:14.463784071Z" level=info msg="shim disconnected" id=7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da Oct 2 19:09:14.463851 env[1113]: time="2023-10-02T19:09:14.463841709Z" level=warning msg="cleaning up after shim disconnected" id=7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da namespace=k8s.io Oct 2 19:09:14.463851 env[1113]: time="2023-10-02T19:09:14.463853221Z" level=info msg="cleaning up dead shim" Oct 2 19:09:14.471324 env[1113]: time="2023-10-02T19:09:14.471267393Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4578 runtime=io.containerd.runc.v2\n" Oct 2 19:09:14.530497 env[1113]: time="2023-10-02T19:09:14.530305136Z" level=info msg="StopContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" returns successfully" Oct 2 19:09:14.531573 env[1113]: time="2023-10-02T19:09:14.531456519Z" level=info msg="StopPodSandbox for \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\"" Oct 2 19:09:14.531573 env[1113]: time="2023-10-02T19:09:14.531617571Z" level=info msg="Container to stop \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:09:14.533510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a-shm.mount: Deactivated successfully. Oct 2 19:09:14.539419 systemd[1]: cri-containerd-a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a.scope: Deactivated successfully. Oct 2 19:09:14.538000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:09:14.543000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:09:14.560040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a-rootfs.mount: Deactivated successfully. 
Oct 2 19:09:14.575522 env[1113]: time="2023-10-02T19:09:14.575431125Z" level=info msg="shim disconnected" id=a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a Oct 2 19:09:14.575522 env[1113]: time="2023-10-02T19:09:14.575496929Z" level=warning msg="cleaning up after shim disconnected" id=a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a namespace=k8s.io Oct 2 19:09:14.575522 env[1113]: time="2023-10-02T19:09:14.575524470Z" level=info msg="cleaning up dead shim" Oct 2 19:09:14.583420 env[1113]: time="2023-10-02T19:09:14.583357930Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:09:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4609 runtime=io.containerd.runc.v2\n" Oct 2 19:09:14.583756 env[1113]: time="2023-10-02T19:09:14.583708599Z" level=info msg="TearDown network for sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" successfully" Oct 2 19:09:14.583798 env[1113]: time="2023-10-02T19:09:14.583750297Z" level=info msg="StopPodSandbox for \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" returns successfully" Oct 2 19:09:14.588296 kubelet[1417]: I1002 19:09:14.588267 1417 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-8547bd6cc6-d8wl8" Oct 2 19:09:14.588296 kubelet[1417]: I1002 19:09:14.588300 1417 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-8547bd6cc6-d8wl8"] Oct 2 19:09:14.632877 kubelet[1417]: I1002 19:09:14.632775 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2b8xj" nodeCondition=["DiskPressure"] Oct 2 19:09:14.660768 kubelet[1417]: I1002 19:09:14.660383 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x9w7p" nodeCondition=["DiskPressure"] Oct 2 19:09:14.682878 kubelet[1417]: I1002 19:09:14.682816 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gxps8" nodeCondition=["DiskPressure"] Oct 2 19:09:14.712969 kubelet[1417]: I1002 19:09:14.712897 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cf7cn" nodeCondition=["DiskPressure"] Oct 2 19:09:14.745285 kubelet[1417]: I1002 19:09:14.745231 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bprm4\" (UniqueName: \"kubernetes.io/projected/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-kube-api-access-bprm4\") pod \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\" (UID: \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\") " Oct 2 19:09:14.745285 kubelet[1417]: I1002 19:09:14.745288 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-var-lib-calico\") pod \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\" (UID: \"8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4\") " Oct 2 19:09:14.745530 kubelet[1417]: I1002 19:09:14.745377 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4" (UID: "8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:09:14.748118 kubelet[1417]: I1002 19:09:14.748076 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-kube-api-access-bprm4" (OuterVolumeSpecName: "kube-api-access-bprm4") pod "8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4" (UID: "8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4"). InnerVolumeSpecName "kube-api-access-bprm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:09:14.749083 systemd[1]: var-lib-kubelet-pods-8f6e9ca9\x2db2e9\x2d4d52\x2d9c9e\x2d92e73ffba2e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbprm4.mount: Deactivated successfully. Oct 2 19:09:14.846521 kubelet[1417]: I1002 19:09:14.846472 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bprm4\" (UniqueName: \"kubernetes.io/projected/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-kube-api-access-bprm4\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:14.846521 kubelet[1417]: I1002 19:09:14.846512 1417 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f6e9ca9-b2e9-4d52-9c9e-92e73ffba2e4-var-lib-calico\") on node \"10.0.0.46\" DevicePath \"\"" Oct 2 19:09:14.923158 kubelet[1417]: E1002 19:09:14.923117 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:15.002497 kubelet[1417]: I1002 19:09:15.002447 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ndhjz" nodeCondition=["DiskPressure"] Oct 2 19:09:15.031856 kubelet[1417]: I1002 19:09:15.031697 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w9lql" nodeCondition=["DiskPressure"] Oct 2 19:09:15.107125 systemd[1]: Removed slice kubepods-besteffort-pod8f6e9ca9_b2e9_4d52_9c9e_92e73ffba2e4.slice. Oct 2 19:09:15.107242 systemd[1]: kubepods-besteffort-pod8f6e9ca9_b2e9_4d52_9c9e_92e73ffba2e4.slice: Consumed 1.044s CPU time. 
Oct 2 19:09:15.334761 kubelet[1417]: I1002 19:09:15.334718 1417 scope.go:117] "RemoveContainer" containerID="7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da" Oct 2 19:09:15.335931 env[1113]: time="2023-10-02T19:09:15.335899557Z" level=info msg="RemoveContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\"" Oct 2 19:09:15.369539 env[1113]: time="2023-10-02T19:09:15.369398477Z" level=info msg="RemoveContainer for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" returns successfully" Oct 2 19:09:15.369746 kubelet[1417]: I1002 19:09:15.369709 1417 scope.go:117] "RemoveContainer" containerID="7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da" Oct 2 19:09:15.370231 env[1113]: time="2023-10-02T19:09:15.370132115Z" level=error msg="ContainerStatus for \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\": not found" Oct 2 19:09:15.370373 kubelet[1417]: E1002 19:09:15.370357 1417 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\": not found" containerID="7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da" Oct 2 19:09:15.370438 kubelet[1417]: I1002 19:09:15.370395 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da"} err="failed to get container status \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d4072a997bae3ef149e596d767a44bc56cf86dc9945134a7eb84fdd03a623da\": not found" Oct 2 19:09:15.391088 kubelet[1417]: I1002 19:09:15.391038 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dx2lc" nodeCondition=["DiskPressure"] Oct 2 19:09:15.416037 kubelet[1417]: I1002 19:09:15.415997 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wpsnv" nodeCondition=["DiskPressure"] Oct 2 19:09:15.435900 kubelet[1417]: I1002 19:09:15.435836 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wg8f5" nodeCondition=["DiskPressure"] Oct 2 19:09:15.457404 kubelet[1417]: I1002 19:09:15.457328 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fwzmv" nodeCondition=["DiskPressure"] Oct 2 19:09:15.489561 kubelet[1417]: I1002 19:09:15.489482 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hd8vc" nodeCondition=["DiskPressure"] Oct 2 19:09:15.520202 kubelet[1417]: I1002 19:09:15.520120 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d4fhk" nodeCondition=["DiskPressure"] Oct 2 19:09:15.561187 kubelet[1417]: I1002 19:09:15.561126 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nm89l" nodeCondition=["DiskPressure"] Oct 2 19:09:15.589535 kubelet[1417]: I1002 19:09:15.589416 1417 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" 
pods=["tigera-operator/tigera-operator-8547bd6cc6-d8wl8"] Oct 2 19:09:15.599717 kubelet[1417]: I1002 19:09:15.599643 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:15.599717 kubelet[1417]: I1002 19:09:15.599698 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:15.601634 env[1113]: time="2023-10-02T19:09:15.601591171Z" level=info msg="StopPodSandbox for \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\"" Oct 2 19:09:15.601779 env[1113]: time="2023-10-02T19:09:15.601692782Z" level=info msg="TearDown network for sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" successfully" Oct 2 19:09:15.602104 env[1113]: time="2023-10-02T19:09:15.601807848Z" level=info msg="StopPodSandbox for \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" returns successfully" Oct 2 19:09:15.602141 env[1113]: time="2023-10-02T19:09:15.602103703Z" level=info msg="RemovePodSandbox for \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\"" Oct 2 19:09:15.602177 env[1113]: time="2023-10-02T19:09:15.602125524Z" level=info msg="Forcibly stopping sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\"" Oct 2 19:09:15.602205 env[1113]: time="2023-10-02T19:09:15.602189905Z" level=info msg="TearDown network for sandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" successfully" Oct 2 19:09:15.606322 env[1113]: time="2023-10-02T19:09:15.606224631Z" level=info msg="RemovePodSandbox \"a9798cba95a202f999e1fb571c62843ad388d21dd69802f410f7f64ad085061a\" returns successfully" Oct 2 19:09:15.606951 kubelet[1417]: I1002 19:09:15.606914 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:15.621401 kubelet[1417]: I1002 19:09:15.621302 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:15.621401 kubelet[1417]: I1002 19:09:15.621391 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","kube-system/coredns-5dd5756b68-8glxb","kube-system/coredns-5dd5756b68-9jw66","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621420 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621432 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621441 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621450 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621458 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:09:15.621642 kubelet[1417]: E1002 19:09:15.621466 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:09:15.621642 kubelet[1417]: I1002 19:09:15.621476 1417 eviction_manager.go:403] "Eviction manager: unable to 
evict any pods from the node" Oct 2 19:09:15.711621 kubelet[1417]: I1002 19:09:15.711543 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-twfnm" nodeCondition=["DiskPressure"] Oct 2 19:09:15.862224 kubelet[1417]: I1002 19:09:15.860999 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-knjxj" nodeCondition=["DiskPressure"] Oct 2 19:09:15.924018 kubelet[1417]: E1002 19:09:15.923885 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:16.050382 kubelet[1417]: I1002 19:09:16.050308 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mz8cd" nodeCondition=["DiskPressure"] Oct 2 19:09:16.104201 env[1113]: time="2023-10-02T19:09:16.104141384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:09:16.160685 kubelet[1417]: I1002 19:09:16.160640 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5wnb8" nodeCondition=["DiskPressure"] Oct 2 19:09:16.311948 kubelet[1417]: I1002 19:09:16.311762 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wmmtt" nodeCondition=["DiskPressure"] Oct 2 19:09:16.376652 env[1113]: time="2023-10-02T19:09:16.376573685Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:16.417924 env[1113]: time="2023-10-02T19:09:16.417815558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:16.418218 kubelet[1417]: E1002 19:09:16.418184 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:09:16.418296 kubelet[1417]: E1002 19:09:16.418249 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:09:16.418400 kubelet[1417]: E1002 19:09:16.418376 1417 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:16.419533 env[1113]: time="2023-10-02T19:09:16.419478981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:09:16.704772 env[1113]: time="2023-10-02T19:09:16.704678092Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:16.790035 env[1113]: time="2023-10-02T19:09:16.789934573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:16.790333 kubelet[1417]: E1002 19:09:16.790301 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:16.790397 kubelet[1417]: E1002 19:09:16.790360 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:16.790543 kubelet[1417]: E1002 19:09:16.790501 1417 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:16.790864 kubelet[1417]: E1002 19:09:16.790579 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:16.924173 kubelet[1417]: E1002 19:09:16.924096 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:16.974432 kubelet[1417]: I1002 19:09:16.974287 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nk6xp" nodeCondition=["DiskPressure"] Oct 2 19:09:17.002599 
kubelet[1417]: I1002 19:09:17.002528 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9992c" nodeCondition=["DiskPressure"] Oct 2 19:09:17.040773 kubelet[1417]: I1002 19:09:17.040685 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dnlfg" nodeCondition=["DiskPressure"] Oct 2 19:09:17.057867 kubelet[1417]: I1002 19:09:17.057814 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p2268" nodeCondition=["DiskPressure"] Oct 2 19:09:17.211830 kubelet[1417]: I1002 19:09:17.211777 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f6nxz" nodeCondition=["DiskPressure"] Oct 2 19:09:17.360763 kubelet[1417]: I1002 19:09:17.360712 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mdvm8" nodeCondition=["DiskPressure"] Oct 2 19:09:17.461913 kubelet[1417]: I1002 19:09:17.461851 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-np6fl" nodeCondition=["DiskPressure"] Oct 2 19:09:17.567157 kubelet[1417]: I1002 19:09:17.567090 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nrbxh" nodeCondition=["DiskPressure"] Oct 2 19:09:17.663196 kubelet[1417]: I1002 19:09:17.663032 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q8gbc" nodeCondition=["DiskPressure"] Oct 2 19:09:17.760690 kubelet[1417]: I1002 19:09:17.760622 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-djspw" nodeCondition=["DiskPressure"] Oct 2 19:09:17.863073 kubelet[1417]: I1002 19:09:17.863016 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5j4vc" nodeCondition=["DiskPressure"] Oct 2 19:09:17.924810 kubelet[1417]: E1002 19:09:17.924642 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:17.963039 kubelet[1417]: I1002 19:09:17.962864 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zkxnv" nodeCondition=["DiskPressure"] Oct 2 19:09:18.077074 kubelet[1417]: I1002 19:09:18.077021 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mllkt" nodeCondition=["DiskPressure"] Oct 2 19:09:18.296006 kubelet[1417]: I1002 19:09:18.295812 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tnxwf" nodeCondition=["DiskPressure"] Oct 2 19:09:18.409224 kubelet[1417]: I1002 19:09:18.408977 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2dw46" nodeCondition=["DiskPressure"] Oct 2 19:09:18.519878 kubelet[1417]: I1002 19:09:18.519821 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rxxpf" nodeCondition=["DiskPressure"] Oct 2 19:09:18.567147 kubelet[1417]: I1002 19:09:18.566518 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-br78r" nodeCondition=["DiskPressure"] Oct 2 19:09:18.668312 kubelet[1417]: I1002 19:09:18.668266 1417 eviction_manager.go:170] "Failed to admit pod to 
node" pod="tigera-operator/tigera-operator-8547bd6cc6-88p8r" nodeCondition=["DiskPressure"] Oct 2 19:09:18.762208 kubelet[1417]: I1002 19:09:18.762154 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xj4c8" nodeCondition=["DiskPressure"] Oct 2 19:09:18.858059 kubelet[1417]: E1002 19:09:18.857945 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:18.867992 kubelet[1417]: I1002 19:09:18.867934 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ggfjf" nodeCondition=["DiskPressure"] Oct 2 19:09:18.925556 kubelet[1417]: E1002 19:09:18.925448 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:18.965097 kubelet[1417]: I1002 19:09:18.965036 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vg76v" nodeCondition=["DiskPressure"] Oct 2 19:09:19.011292 kubelet[1417]: I1002 19:09:19.011221 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vfpl2" nodeCondition=["DiskPressure"] Oct 2 19:09:19.114186 kubelet[1417]: I1002 19:09:19.114038 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g2v9v" nodeCondition=["DiskPressure"] Oct 2 19:09:19.217289 kubelet[1417]: I1002 19:09:19.217244 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qgggb" nodeCondition=["DiskPressure"] Oct 2 19:09:19.529340 kubelet[1417]: I1002 19:09:19.528887 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tvmvr" nodeCondition=["DiskPressure"] Oct 2 19:09:19.675129 kubelet[1417]: I1002 19:09:19.675053 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9b5m4" nodeCondition=["DiskPressure"] Oct 2 19:09:19.714456 kubelet[1417]: I1002 19:09:19.714391 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b9hks" nodeCondition=["DiskPressure"] Oct 2 19:09:19.812779 kubelet[1417]: I1002 19:09:19.812619 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zlgwv" nodeCondition=["DiskPressure"] Oct 2 19:09:19.926614 kubelet[1417]: E1002 19:09:19.926540 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:20.014047 kubelet[1417]: I1002 19:09:20.013976 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f8svl" nodeCondition=["DiskPressure"] Oct 2 19:09:20.179955 kubelet[1417]: I1002 19:09:20.179839 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-55nr9" nodeCondition=["DiskPressure"] Oct 2 19:09:20.346636 kubelet[1417]: I1002 19:09:20.346222 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hnmnq" nodeCondition=["DiskPressure"] Oct 2 19:09:20.425560 kubelet[1417]: I1002 19:09:20.425494 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bn78r" nodeCondition=["DiskPressure"] Oct 2 19:09:20.529524 kubelet[1417]: I1002 
19:09:20.528329 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vkh72" nodeCondition=["DiskPressure"] Oct 2 19:09:20.635136 kubelet[1417]: I1002 19:09:20.635079 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kvfx7" nodeCondition=["DiskPressure"] Oct 2 19:09:20.812702 kubelet[1417]: I1002 19:09:20.808620 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g94hw" nodeCondition=["DiskPressure"] Oct 2 19:09:20.927847 kubelet[1417]: E1002 19:09:20.927761 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:20.946780 kubelet[1417]: I1002 19:09:20.946499 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nnhqm" nodeCondition=["DiskPressure"] Oct 2 19:09:21.051507 kubelet[1417]: I1002 19:09:21.051274 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5zdcg" nodeCondition=["DiskPressure"] Oct 2 19:09:21.321717 kubelet[1417]: I1002 19:09:21.320052 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5gbxv" nodeCondition=["DiskPressure"] Oct 2 19:09:21.452305 kubelet[1417]: I1002 19:09:21.451096 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4bjvf" nodeCondition=["DiskPressure"] Oct 2 19:09:21.607444 kubelet[1417]: I1002 19:09:21.607385 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8k5mh" nodeCondition=["DiskPressure"] Oct 2 19:09:21.740378 kubelet[1417]: I1002 19:09:21.740078 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cnz6c" nodeCondition=["DiskPressure"] Oct 2 19:09:21.914027 kubelet[1417]: I1002 19:09:21.913230 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k4mtb" nodeCondition=["DiskPressure"] Oct 2 19:09:21.929704 kubelet[1417]: E1002 19:09:21.928884 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:22.454387 kubelet[1417]: I1002 19:09:22.453899 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9zdd6" nodeCondition=["DiskPressure"] Oct 2 19:09:22.578066 kubelet[1417]: I1002 19:09:22.578002 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-j5m7s" nodeCondition=["DiskPressure"] Oct 2 19:09:22.680844 kubelet[1417]: I1002 19:09:22.680778 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9xgrn" nodeCondition=["DiskPressure"] Oct 2 19:09:22.777693 kubelet[1417]: I1002 19:09:22.776084 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mbpnc" nodeCondition=["DiskPressure"] Oct 2 19:09:22.876065 kubelet[1417]: I1002 19:09:22.872419 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wnlpz" nodeCondition=["DiskPressure"] Oct 2 19:09:22.932370 kubelet[1417]: E1002 19:09:22.932207 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:09:23.015609 kubelet[1417]: I1002 19:09:23.015550 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sbnkz" nodeCondition=["DiskPressure"] Oct 2 19:09:23.160352 kubelet[1417]: I1002 19:09:23.160273 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zrx6v" nodeCondition=["DiskPressure"] Oct 2 19:09:23.233140 kubelet[1417]: I1002 19:09:23.231020 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2prbk" nodeCondition=["DiskPressure"] Oct 2 19:09:23.330499 kubelet[1417]: I1002 19:09:23.327211 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xd2fq" nodeCondition=["DiskPressure"] Oct 2 19:09:23.429948 kubelet[1417]: I1002 19:09:23.429764 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pkqbb" nodeCondition=["DiskPressure"] Oct 2 19:09:23.573334 kubelet[1417]: I1002 19:09:23.573011 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tbv45" nodeCondition=["DiskPressure"] Oct 2 19:09:23.933023 kubelet[1417]: E1002 19:09:23.932948 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:24.264142 kubelet[1417]: I1002 19:09:24.263939 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2pndn" nodeCondition=["DiskPressure"] Oct 2 19:09:24.426361 kubelet[1417]: I1002 19:09:24.426287 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-km4rw" nodeCondition=["DiskPressure"] Oct 2 19:09:24.538465 kubelet[1417]: I1002 19:09:24.527350 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kvcqz" nodeCondition=["DiskPressure"] Oct 2 19:09:24.771709 kubelet[1417]: I1002 19:09:24.771612 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8hmkc" nodeCondition=["DiskPressure"] Oct 2 19:09:24.887005 kubelet[1417]: I1002 19:09:24.886820 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2nkz7" nodeCondition=["DiskPressure"] Oct 2 19:09:24.934633 kubelet[1417]: E1002 19:09:24.934511 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:24.954859 kubelet[1417]: I1002 19:09:24.954724 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g5bqv" nodeCondition=["DiskPressure"] Oct 2 19:09:25.033292 kubelet[1417]: I1002 19:09:25.032780 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v8vfl" nodeCondition=["DiskPressure"] Oct 2 19:09:25.130866 kubelet[1417]: I1002 19:09:25.130104 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ch8kw" nodeCondition=["DiskPressure"] Oct 2 19:09:25.574693 kubelet[1417]: I1002 19:09:25.574636 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9ppxb" nodeCondition=["DiskPressure"] Oct 2 19:09:25.672380 kubelet[1417]: I1002 19:09:25.670262 1417 
eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:25.672380 kubelet[1417]: I1002 19:09:25.670314 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:25.682082 kubelet[1417]: I1002 19:09:25.682053 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:25.691271 kubelet[1417]: I1002 19:09:25.690568 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4pbkd" nodeCondition=["DiskPressure"] Oct 2 19:09:25.718287 kubelet[1417]: I1002 19:09:25.715831 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:25.718287 kubelet[1417]: I1002 19:09:25.715998 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716042 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716064 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716080 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716095 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716107 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:09:25.718287 kubelet[1417]: E1002 19:09:25.716119 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:09:25.718287 kubelet[1417]: I1002 19:09:25.716134 1417 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:09:25.841309 kubelet[1417]: I1002 19:09:25.840893 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ncx82" nodeCondition=["DiskPressure"] Oct 2 19:09:25.936566 kubelet[1417]: E1002 19:09:25.936466 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:26.155391 kubelet[1417]: I1002 19:09:26.154564 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tfr72" nodeCondition=["DiskPressure"] Oct 2 19:09:26.240657 kubelet[1417]: I1002 19:09:26.240360 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vf82n" nodeCondition=["DiskPressure"] Oct 2 19:09:26.355623 kubelet[1417]: I1002 19:09:26.352334 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gnsph" nodeCondition=["DiskPressure"] Oct 2 19:09:26.666148 kubelet[1417]: I1002 19:09:26.664897 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-srhgb" nodeCondition=["DiskPressure"] Oct 2 
19:09:26.841023 kubelet[1417]: I1002 19:09:26.840974 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jmr5j" nodeCondition=["DiskPressure"] Oct 2 19:09:26.941432 kubelet[1417]: E1002 19:09:26.937585 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:27.155444 kubelet[1417]: I1002 19:09:27.154679 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dgtt8" nodeCondition=["DiskPressure"] Oct 2 19:09:27.623063 kubelet[1417]: I1002 19:09:27.623014 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zh9d5" nodeCondition=["DiskPressure"] Oct 2 19:09:27.686716 kubelet[1417]: I1002 19:09:27.686635 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x7f55" nodeCondition=["DiskPressure"] Oct 2 19:09:27.764219 kubelet[1417]: I1002 19:09:27.754774 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7s6qk" nodeCondition=["DiskPressure"] Oct 2 19:09:27.806930 kubelet[1417]: I1002 19:09:27.806861 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7p472" nodeCondition=["DiskPressure"] Oct 2 19:09:27.852348 kubelet[1417]: I1002 19:09:27.851092 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qrjvm" nodeCondition=["DiskPressure"] Oct 2 19:09:27.942637 kubelet[1417]: E1002 19:09:27.942460 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:27.963040 kubelet[1417]: I1002 19:09:27.962966 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gd7ws" nodeCondition=["DiskPressure"] Oct 2 19:09:28.101718 kubelet[1417]: E1002 19:09:28.101675 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:09:28.943423 kubelet[1417]: E1002 19:09:28.943366 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:28.949766 kubelet[1417]: I1002 19:09:28.949706 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mb6dt" nodeCondition=["DiskPressure"] Oct 2 19:09:29.228990 systemd[1]: run-containerd-runc-k8s.io-7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3-runc.g8eXWY.mount: Deactivated successfully. Oct 2 19:09:29.236056 kubelet[1417]: I1002 19:09:29.236008 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jc69w" nodeCondition=["DiskPressure"] Oct 2 19:09:29.265172 systemd[1]: run-containerd-runc-k8s.io-7d7c1578698a3bcfd51efb42272944b15ef2a743f2b00ebb69b772d87288cdc3-runc.3MREsA.mount: Deactivated successfully. 
Oct 2 19:09:29.279920 kubelet[1417]: I1002 19:09:29.279581 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d8fwf" nodeCondition=["DiskPressure"] Oct 2 19:09:29.341192 kubelet[1417]: I1002 19:09:29.338390 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5n5tq" nodeCondition=["DiskPressure"] Oct 2 19:09:29.374950 kubelet[1417]: I1002 19:09:29.374882 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5vvg2" nodeCondition=["DiskPressure"] Oct 2 19:09:29.424040 kubelet[1417]: I1002 19:09:29.423605 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ws5dh" nodeCondition=["DiskPressure"] Oct 2 19:09:29.483187 kubelet[1417]: I1002 19:09:29.483003 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-trllk" nodeCondition=["DiskPressure"] Oct 2 19:09:29.619154 kubelet[1417]: I1002 19:09:29.619072 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rzlxv" nodeCondition=["DiskPressure"] Oct 2 19:09:29.676019 kubelet[1417]: I1002 19:09:29.675465 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4jqzk" nodeCondition=["DiskPressure"] Oct 2 19:09:29.817247 kubelet[1417]: I1002 19:09:29.813940 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m4tx5" nodeCondition=["DiskPressure"] Oct 2 19:09:29.856988 kubelet[1417]: I1002 19:09:29.856470 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b89tr" nodeCondition=["DiskPressure"] Oct 2 19:09:29.908806 kubelet[1417]: I1002 19:09:29.908716 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l49h7" nodeCondition=["DiskPressure"] Oct 2 19:09:29.944609 kubelet[1417]: E1002 19:09:29.944508 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:29.987239 kubelet[1417]: I1002 19:09:29.984129 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hzv8w" nodeCondition=["DiskPressure"] Oct 2 19:09:30.031612 kubelet[1417]: I1002 19:09:30.031245 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x8nqg" nodeCondition=["DiskPressure"] Oct 2 19:09:30.098232 kubelet[1417]: I1002 19:09:30.098166 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fgb7p" nodeCondition=["DiskPressure"] Oct 2 19:09:30.135057 kubelet[1417]: I1002 19:09:30.134979 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qxk97" nodeCondition=["DiskPressure"] Oct 2 19:09:30.180225 kubelet[1417]: I1002 19:09:30.180162 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kx694" nodeCondition=["DiskPressure"] Oct 2 19:09:30.424506 kubelet[1417]: I1002 19:09:30.424364 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sthpq" nodeCondition=["DiskPressure"] Oct 2 19:09:30.472335 systemd[1]: 
run-containerd-runc-k8s.io-b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94-runc.doVIcg.mount: Deactivated successfully. Oct 2 19:09:30.554124 kubelet[1417]: I1002 19:09:30.554046 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7xfw9" nodeCondition=["DiskPressure"] Oct 2 19:09:30.673233 kubelet[1417]: I1002 19:09:30.673168 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hsr29" nodeCondition=["DiskPressure"] Oct 2 19:09:30.734082 kubelet[1417]: I1002 19:09:30.731678 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bfnkc" nodeCondition=["DiskPressure"] Oct 2 19:09:30.945622 kubelet[1417]: E1002 19:09:30.945572 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:31.105086 kubelet[1417]: E1002 19:09:31.104683 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:31.191292 kubelet[1417]: I1002 19:09:31.189677 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jzphj" nodeCondition=["DiskPressure"] Oct 2 19:09:31.475007 kubelet[1417]: I1002 19:09:31.474407 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tvqxq" nodeCondition=["DiskPressure"] Oct 2 19:09:31.523097 kubelet[1417]: I1002 19:09:31.522960 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4j7m5" nodeCondition=["DiskPressure"] Oct 2 19:09:31.550669 kubelet[1417]: I1002 19:09:31.550606 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sgnnc" nodeCondition=["DiskPressure"] Oct 2 19:09:31.584139 kubelet[1417]: I1002 19:09:31.584078 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5fdhm" nodeCondition=["DiskPressure"] Oct 2 19:09:31.620048 kubelet[1417]: I1002 19:09:31.619972 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-282sj" nodeCondition=["DiskPressure"] Oct 2 19:09:31.855276 kubelet[1417]: I1002 19:09:31.846327 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k987s" nodeCondition=["DiskPressure"] Oct 2 19:09:31.950842 kubelet[1417]: E1002 19:09:31.950631 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:31.975853 kubelet[1417]: I1002 19:09:31.974892 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2bbmh" nodeCondition=["DiskPressure"] Oct 2 19:09:32.088326 kubelet[1417]: I1002 19:09:32.085442 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n67ks" nodeCondition=["DiskPressure"] Oct 2 19:09:32.333618 kubelet[1417]: I1002 19:09:32.332558 1417 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6nqqg" nodeCondition=["DiskPressure"] Oct 2 19:09:32.436678 kubelet[1417]: I1002 19:09:32.436621 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7wrk5" nodeCondition=["DiskPressure"] Oct 2 19:09:32.762642 kubelet[1417]: I1002 19:09:32.760708 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rh5kt" nodeCondition=["DiskPressure"] Oct 2 19:09:32.817759 kubelet[1417]: I1002 19:09:32.817360 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-s9tw2" nodeCondition=["DiskPressure"] Oct 2 19:09:32.951599 kubelet[1417]: E1002 19:09:32.951495 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:33.276121 kubelet[1417]: I1002 19:09:33.276078 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qs8wg" nodeCondition=["DiskPressure"] Oct 2 19:09:33.368148 kubelet[1417]: I1002 19:09:33.368059 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mdxwg" nodeCondition=["DiskPressure"] Oct 2 19:09:33.401036 kubelet[1417]: I1002 19:09:33.400966 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gm5gd" nodeCondition=["DiskPressure"] Oct 2 19:09:33.427395 kubelet[1417]: I1002 19:09:33.427315 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pfhpt" nodeCondition=["DiskPressure"] Oct 2 19:09:33.593991 kubelet[1417]: I1002 19:09:33.593913 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mmx69" nodeCondition=["DiskPressure"] Oct 2 19:09:33.645655 kubelet[1417]: I1002 19:09:33.645606 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7fwqr" nodeCondition=["DiskPressure"] Oct 2 19:09:33.737430 kubelet[1417]: I1002 19:09:33.737362 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wksb8" nodeCondition=["DiskPressure"] Oct 2 19:09:33.781567 kubelet[1417]: I1002 19:09:33.779292 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wrqnc" nodeCondition=["DiskPressure"] Oct 2 19:09:33.880658 kubelet[1417]: I1002 19:09:33.876758 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tbdcd" nodeCondition=["DiskPressure"] Oct 2 19:09:33.952811 kubelet[1417]: E1002 19:09:33.952686 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:34.073702 kubelet[1417]: I1002 19:09:34.073616 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z8c6f" nodeCondition=["DiskPressure"] Oct 2 19:09:34.189278 kubelet[1417]: I1002 19:09:34.189115 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-frf5m" nodeCondition=["DiskPressure"] Oct 2 19:09:34.781528 kubelet[1417]: I1002 19:09:34.781463 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pj96j" 
nodeCondition=["DiskPressure"] Oct 2 19:09:34.858159 kubelet[1417]: I1002 19:09:34.858080 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tww5f" nodeCondition=["DiskPressure"] Oct 2 19:09:34.918450 kubelet[1417]: I1002 19:09:34.918372 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mmbkp" nodeCondition=["DiskPressure"] Oct 2 19:09:34.961368 kubelet[1417]: E1002 19:09:34.955126 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:34.976142 kubelet[1417]: I1002 19:09:34.974968 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4cpvc" nodeCondition=["DiskPressure"] Oct 2 19:09:35.441772 kubelet[1417]: I1002 19:09:35.441677 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fdcrq" nodeCondition=["DiskPressure"] Oct 2 19:09:35.744125 kubelet[1417]: I1002 19:09:35.743978 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:35.744125 kubelet[1417]: I1002 19:09:35.744023 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:35.748707 kubelet[1417]: I1002 19:09:35.748570 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:35.806006 kubelet[1417]: I1002 19:09:35.805853 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:35.832917 kubelet[1417]: I1002 19:09:35.815404 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815588 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815621 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815823 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815847 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815910 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:09:35.832917 kubelet[1417]: E1002 19:09:35.815928 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:09:35.832917 kubelet[1417]: I1002 19:09:35.815942 1417 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:09:35.929444 kubelet[1417]: I1002 19:09:35.928461 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m69mc" nodeCondition=["DiskPressure"] Oct 2 19:09:35.959564 kubelet[1417]: E1002 19:09:35.959447 1417 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:36.340378 kubelet[1417]: I1002 19:09:36.340306 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n8gtr" nodeCondition=["DiskPressure"] Oct 2 19:09:36.524561 kubelet[1417]: I1002 19:09:36.524500 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5p8px" nodeCondition=["DiskPressure"] Oct 2 19:09:36.869116 kubelet[1417]: I1002 19:09:36.868575 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x5gcg" nodeCondition=["DiskPressure"] Oct 2 19:09:36.960566 kubelet[1417]: E1002 19:09:36.960438 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:37.175302 kubelet[1417]: I1002 19:09:37.175117 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q8zzc" nodeCondition=["DiskPressure"] Oct 2 19:09:37.493452 kubelet[1417]: I1002 19:09:37.493222 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-22mkk" nodeCondition=["DiskPressure"] Oct 2 19:09:37.711543 kubelet[1417]: I1002 19:09:37.711492 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-92rmt" nodeCondition=["DiskPressure"] Oct 2 19:09:37.896332 kubelet[1417]: I1002 19:09:37.896256 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xm4n5" nodeCondition=["DiskPressure"] Oct 2 19:09:37.961320 kubelet[1417]: E1002 19:09:37.961251 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:38.166021 kubelet[1417]: I1002 19:09:38.165867 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qtss6" nodeCondition=["DiskPressure"] Oct 2 19:09:38.265012 kubelet[1417]: I1002 19:09:38.263090 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wql27" nodeCondition=["DiskPressure"] Oct 2 19:09:38.352534 kubelet[1417]: I1002 19:09:38.352452 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9pnl8" nodeCondition=["DiskPressure"] Oct 2 19:09:38.741987 kubelet[1417]: I1002 19:09:38.741209 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6h4mr" nodeCondition=["DiskPressure"] Oct 2 19:09:38.858185 kubelet[1417]: E1002 19:09:38.858112 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:38.961837 kubelet[1417]: E1002 19:09:38.961768 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:38.988297 kubelet[1417]: I1002 19:09:38.988178 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dqvdz" nodeCondition=["DiskPressure"] Oct 2 19:09:39.027117 kubelet[1417]: I1002 19:09:39.026958 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v49xl" nodeCondition=["DiskPressure"] Oct 2 19:09:39.051002 kubelet[1417]: I1002 19:09:39.050948 1417 eviction_manager.go:170] "Failed to admit pod to 
node" pod="tigera-operator/tigera-operator-8547bd6cc6-msdcv" nodeCondition=["DiskPressure"] Oct 2 19:09:39.089181 kubelet[1417]: I1002 19:09:39.089119 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8htg6" nodeCondition=["DiskPressure"] Oct 2 19:09:39.125418 kubelet[1417]: I1002 19:09:39.125323 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-45qc4" nodeCondition=["DiskPressure"] Oct 2 19:09:39.158281 kubelet[1417]: I1002 19:09:39.158206 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kwskw" nodeCondition=["DiskPressure"] Oct 2 19:09:39.184236 kubelet[1417]: I1002 19:09:39.184170 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g5q7n" nodeCondition=["DiskPressure"] Oct 2 19:09:39.966491 kubelet[1417]: E1002 19:09:39.962662 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:39.974419 kubelet[1417]: I1002 19:09:39.974337 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lmhpq" nodeCondition=["DiskPressure"] Oct 2 19:09:40.006483 kubelet[1417]: I1002 19:09:40.006419 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tsggc" nodeCondition=["DiskPressure"] Oct 2 19:09:40.039667 kubelet[1417]: I1002 19:09:40.039590 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nbsdz" nodeCondition=["DiskPressure"] Oct 2 19:09:40.074415 kubelet[1417]: I1002 19:09:40.074339 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x87js" nodeCondition=["DiskPressure"] Oct 2 19:09:40.104192 kubelet[1417]: I1002 19:09:40.104118 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tnxw2" nodeCondition=["DiskPressure"] Oct 2 19:09:40.132696 kubelet[1417]: I1002 19:09:40.132639 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q6rf2" nodeCondition=["DiskPressure"] Oct 2 19:09:40.164896 kubelet[1417]: I1002 19:09:40.164825 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tlm6w" nodeCondition=["DiskPressure"] Oct 2 19:09:40.193439 kubelet[1417]: I1002 19:09:40.193377 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-thqlh" nodeCondition=["DiskPressure"] Oct 2 19:09:40.223213 kubelet[1417]: I1002 19:09:40.223035 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dvllj" nodeCondition=["DiskPressure"] Oct 2 19:09:40.256304 kubelet[1417]: I1002 19:09:40.256227 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rvsn8" nodeCondition=["DiskPressure"] Oct 2 19:09:40.361512 kubelet[1417]: I1002 19:09:40.361435 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5mdj4" nodeCondition=["DiskPressure"] Oct 2 19:09:40.467197 kubelet[1417]: I1002 19:09:40.467129 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hwp6s" nodeCondition=["DiskPressure"] 
Oct 2 19:09:40.480781 systemd[1]: run-containerd-runc-k8s.io-b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94-runc.4Ax9Ut.mount: Deactivated successfully. Oct 2 19:09:40.568121 kubelet[1417]: I1002 19:09:40.567804 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ljh88" nodeCondition=["DiskPressure"] Oct 2 19:09:40.807274 kubelet[1417]: I1002 19:09:40.806166 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wdvg5" nodeCondition=["DiskPressure"] Oct 2 19:09:40.831132 kubelet[1417]: I1002 19:09:40.831064 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-55xb5" nodeCondition=["DiskPressure"] Oct 2 19:09:40.908856 kubelet[1417]: I1002 19:09:40.908791 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8nw8v" nodeCondition=["DiskPressure"] Oct 2 19:09:40.967135 kubelet[1417]: E1002 19:09:40.967067 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:41.009608 kubelet[1417]: I1002 19:09:41.009515 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-48f2b" nodeCondition=["DiskPressure"] Oct 2 19:09:41.110067 kubelet[1417]: I1002 19:09:41.109976 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gszzh" nodeCondition=["DiskPressure"] Oct 2 19:09:41.213505 kubelet[1417]: I1002 19:09:41.212592 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2tdbf" nodeCondition=["DiskPressure"] Oct 2 19:09:41.428888 kubelet[1417]: I1002 19:09:41.427843 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r65xf" nodeCondition=["DiskPressure"] Oct 2 19:09:41.531389 kubelet[1417]: I1002 19:09:41.531312 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-j4hj6" nodeCondition=["DiskPressure"] Oct 2 19:09:41.644152 kubelet[1417]: I1002 19:09:41.633102 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9vt7h" nodeCondition=["DiskPressure"] Oct 2 19:09:41.721379 kubelet[1417]: I1002 19:09:41.720996 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r84n8" nodeCondition=["DiskPressure"] Oct 2 19:09:41.967663 kubelet[1417]: E1002 19:09:41.967590 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:42.393594 kubelet[1417]: I1002 19:09:42.393511 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vtwhv" nodeCondition=["DiskPressure"] Oct 2 19:09:42.421091 kubelet[1417]: I1002 19:09:42.421028 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vxtjz" nodeCondition=["DiskPressure"] Oct 2 19:09:42.449272 kubelet[1417]: I1002 19:09:42.449200 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tkzx4" nodeCondition=["DiskPressure"] Oct 2 19:09:42.476575 kubelet[1417]: I1002 19:09:42.476507 1417 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-zzx5t" nodeCondition=["DiskPressure"] Oct 2 19:09:42.509916 kubelet[1417]: I1002 19:09:42.509833 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pg682" nodeCondition=["DiskPressure"] Oct 2 19:09:42.539302 kubelet[1417]: I1002 19:09:42.539221 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dqbbv" nodeCondition=["DiskPressure"] Oct 2 19:09:42.665340 kubelet[1417]: I1002 19:09:42.664891 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xq24n" nodeCondition=["DiskPressure"] Oct 2 19:09:42.968770 kubelet[1417]: E1002 19:09:42.968608 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:43.018050 kubelet[1417]: I1002 19:09:43.017963 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4vllb" nodeCondition=["DiskPressure"] Oct 2 19:09:43.042484 kubelet[1417]: I1002 19:09:43.042412 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lh5kj" nodeCondition=["DiskPressure"] Oct 2 19:09:43.069112 kubelet[1417]: I1002 19:09:43.069050 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vctb5" nodeCondition=["DiskPressure"] Oct 2 19:09:43.159357 kubelet[1417]: I1002 19:09:43.159270 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hsnsx" nodeCondition=["DiskPressure"] Oct 2 19:09:43.526680 kubelet[1417]: I1002 19:09:43.526607 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h7hcl" nodeCondition=["DiskPressure"] Oct 2 19:09:43.552981 kubelet[1417]: I1002 19:09:43.552921 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rw2mw" nodeCondition=["DiskPressure"] Oct 2 19:09:43.658844 kubelet[1417]: I1002 19:09:43.658771 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9fk7h" nodeCondition=["DiskPressure"] Oct 2 19:09:43.778690 kubelet[1417]: I1002 19:09:43.778487 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fdzjn" nodeCondition=["DiskPressure"] Oct 2 19:09:43.963383 kubelet[1417]: I1002 19:09:43.963322 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xwj6x" nodeCondition=["DiskPressure"] Oct 2 19:09:43.968983 kubelet[1417]: E1002 19:09:43.968924 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:44.170243 kubelet[1417]: I1002 19:09:44.170163 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x4zxw" nodeCondition=["DiskPressure"] Oct 2 19:09:44.209963 kubelet[1417]: I1002 19:09:44.209895 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dhxc5" nodeCondition=["DiskPressure"] Oct 2 19:09:44.399829 kubelet[1417]: I1002 19:09:44.399716 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9sfq5" nodeCondition=["DiskPressure"] Oct 2 19:09:44.508636 
kubelet[1417]: I1002 19:09:44.508441 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hrmrl" nodeCondition=["DiskPressure"] Oct 2 19:09:44.681544 kubelet[1417]: I1002 19:09:44.681487 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rwzvq" nodeCondition=["DiskPressure"] Oct 2 19:09:44.710504 kubelet[1417]: I1002 19:09:44.710446 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2hh8v" nodeCondition=["DiskPressure"] Oct 2 19:09:44.966707 kubelet[1417]: I1002 19:09:44.966641 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mn5ht" nodeCondition=["DiskPressure"] Oct 2 19:09:44.969021 kubelet[1417]: E1002 19:09:44.969008 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:45.102414 env[1113]: time="2023-10-02T19:09:45.102363310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:09:45.155046 kubelet[1417]: I1002 19:09:45.154996 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x8rdp" nodeCondition=["DiskPressure"] Oct 2 19:09:45.165637 kubelet[1417]: I1002 19:09:45.165587 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8547bd6cc6-x8rdp" podStartSLOduration=0.165525791 podCreationTimestamp="2023-10-02 19:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:09:45.162060048 +0000 UTC m=+106.664098001" watchObservedRunningTime="2023-10-02 19:09:45.165525791 +0000 UTC m=+106.667563734" Oct 2 19:09:45.211190 kubelet[1417]: I1002 19:09:45.211127 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lwlph" nodeCondition=["DiskPressure"] Oct 2 19:09:45.308511 kubelet[1417]: I1002 19:09:45.308028 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7tnr7" nodeCondition=["DiskPressure"] Oct 2 19:09:45.359256 env[1113]: time="2023-10-02T19:09:45.359164352Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:45.364776 env[1113]: time="2023-10-02T19:09:45.364651616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:45.365032 kubelet[1417]: E1002 19:09:45.365001 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:09:45.365183 kubelet[1417]: E1002 19:09:45.365060 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:09:45.365232 kubelet[1417]: E1002 19:09:45.365211 1417 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:45.366129 env[1113]: time="2023-10-02T19:09:45.366102783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:09:45.408620 kubelet[1417]: I1002 19:09:45.408568 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ht9pn" nodeCondition=["DiskPressure"] Oct 2 19:09:45.511362 kubelet[1417]: I1002 19:09:45.511312 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xv7r4" nodeCondition=["DiskPressure"] Oct 2 19:09:45.608100 kubelet[1417]: I1002 19:09:45.608033 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lc6zv" nodeCondition=["DiskPressure"] Oct 2 19:09:45.617169 env[1113]: time="2023-10-02T19:09:45.617085347Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:09:45.618392 env[1113]: time="2023-10-02T19:09:45.618328633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:09:45.618770 kubelet[1417]: E1002 19:09:45.618706 1417 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:45.618770 kubelet[1417]: E1002 19:09:45.618777 1417 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:09:45.619035 kubelet[1417]: E1002 19:09:45.618874 1417 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v7j8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2ckzv_calico-system(20101097-40e7-4d0a-a992-23f4379dc0f4): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:09:45.619035 kubelet[1417]: E1002 19:09:45.618931 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:45.711085 kubelet[1417]: I1002 19:09:45.710852 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-s6trb" nodeCondition=["DiskPressure"] Oct 2 19:09:45.810159 kubelet[1417]: I1002 19:09:45.810078 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h5r8w" nodeCondition=["DiskPressure"] Oct 2 19:09:45.831393 kubelet[1417]: I1002 19:09:45.831354 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:45.831393 kubelet[1417]: I1002 19:09:45.831400 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:45.833180 kubelet[1417]: I1002 19:09:45.833127 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:45.843805 kubelet[1417]: I1002 19:09:45.843760 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:45.843993 kubelet[1417]: I1002 19:09:45.843838 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843868 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843880 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843889 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843898 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843906 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:09:45.843993 kubelet[1417]: E1002 19:09:45.843915 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:09:45.843993 kubelet[1417]: I1002 19:09:45.843924 1417 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:09:45.916837 kubelet[1417]: I1002 19:09:45.915956 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2s9jc" nodeCondition=["DiskPressure"] Oct 2 19:09:45.969377 kubelet[1417]: E1002 19:09:45.969294 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:09:46.108303 kubelet[1417]: I1002 19:09:46.108248 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sdrck" nodeCondition=["DiskPressure"] Oct 2 19:09:46.216674 kubelet[1417]: I1002 19:09:46.215335 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pl8d5" nodeCondition=["DiskPressure"] Oct 2 19:09:46.310500 kubelet[1417]: I1002 19:09:46.310402 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8glq7" nodeCondition=["DiskPressure"] Oct 2 19:09:46.419447 kubelet[1417]: I1002 19:09:46.419383 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-j2jfj" nodeCondition=["DiskPressure"] Oct 2 19:09:46.511518 kubelet[1417]: I1002 19:09:46.511341 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rzbsw" nodeCondition=["DiskPressure"] Oct 2 19:09:46.615084 kubelet[1417]: I1002 19:09:46.615012 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k2hhm" nodeCondition=["DiskPressure"] Oct 2 19:09:46.710704 kubelet[1417]: I1002 19:09:46.710637 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x778g" nodeCondition=["DiskPressure"] Oct 2 19:09:46.816628 kubelet[1417]: I1002 19:09:46.816445 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t7gf6" nodeCondition=["DiskPressure"] Oct 2 19:09:46.910725 kubelet[1417]: I1002 19:09:46.910640 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2z87m" nodeCondition=["DiskPressure"] Oct 2 19:09:46.969776 kubelet[1417]: E1002 19:09:46.969681 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:47.117590 kubelet[1417]: I1002 19:09:47.114096 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7999j" nodeCondition=["DiskPressure"] Oct 2 19:09:47.229731 kubelet[1417]: I1002 19:09:47.229660 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bvv24" nodeCondition=["DiskPressure"] Oct 2 19:09:47.321703 kubelet[1417]: I1002 19:09:47.320104 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9ggkv" nodeCondition=["DiskPressure"] Oct 2 19:09:47.513075 kubelet[1417]: I1002 19:09:47.512875 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mb7kd" nodeCondition=["DiskPressure"] Oct 2 19:09:47.619575 kubelet[1417]: I1002 19:09:47.618943 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8lbxx" nodeCondition=["DiskPressure"] Oct 2 19:09:47.725538 kubelet[1417]: I1002 19:09:47.725467 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qs9tx" nodeCondition=["DiskPressure"] Oct 2 19:09:47.835024 kubelet[1417]: I1002 19:09:47.816619 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5bpkm" nodeCondition=["DiskPressure"] Oct 2 19:09:47.968109 kubelet[1417]: I1002 19:09:47.968032 
1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vwbs6" nodeCondition=["DiskPressure"] Oct 2 19:09:47.971008 kubelet[1417]: E1002 19:09:47.970949 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:48.013364 kubelet[1417]: I1002 19:09:48.013302 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kdrh2" nodeCondition=["DiskPressure"] Oct 2 19:09:48.114391 kubelet[1417]: I1002 19:09:48.112491 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8pbzv" nodeCondition=["DiskPressure"] Oct 2 19:09:48.412772 kubelet[1417]: I1002 19:09:48.412612 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9lqfb" nodeCondition=["DiskPressure"] Oct 2 19:09:48.492410 kubelet[1417]: I1002 19:09:48.492343 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-djkfd" nodeCondition=["DiskPressure"] Oct 2 19:09:48.523033 kubelet[1417]: I1002 19:09:48.522971 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9mkkt" nodeCondition=["DiskPressure"] Oct 2 19:09:48.838594 kubelet[1417]: I1002 19:09:48.838528 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m5vps" nodeCondition=["DiskPressure"] Oct 2 19:09:48.971635 kubelet[1417]: E1002 19:09:48.971571 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:49.000994 kubelet[1417]: I1002 19:09:49.000896 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lkzxl" nodeCondition=["DiskPressure"] Oct 2 19:09:49.035692 kubelet[1417]: I1002 19:09:49.035623 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f522f" nodeCondition=["DiskPressure"] Oct 2 19:09:49.158681 kubelet[1417]: I1002 19:09:49.158345 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9sdlz" nodeCondition=["DiskPressure"] Oct 2 19:09:49.262450 kubelet[1417]: I1002 19:09:49.262390 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gff5b" nodeCondition=["DiskPressure"] Oct 2 19:09:49.359432 kubelet[1417]: I1002 19:09:49.359339 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kfq8h" nodeCondition=["DiskPressure"] Oct 2 19:09:49.464420 kubelet[1417]: I1002 19:09:49.464247 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zr8cr" nodeCondition=["DiskPressure"] Oct 2 19:09:49.558920 kubelet[1417]: I1002 19:09:49.558855 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9hpw8" nodeCondition=["DiskPressure"] Oct 2 19:09:49.608090 kubelet[1417]: I1002 19:09:49.608023 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fvrgz" nodeCondition=["DiskPressure"] Oct 2 19:09:49.952816 kubelet[1417]: I1002 19:09:49.952759 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4bprm" 
nodeCondition=["DiskPressure"] Oct 2 19:09:49.971859 kubelet[1417]: E1002 19:09:49.971815 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:50.203552 kubelet[1417]: I1002 19:09:50.203220 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5j772" nodeCondition=["DiskPressure"] Oct 2 19:09:50.239252 kubelet[1417]: I1002 19:09:50.239178 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b66cj" nodeCondition=["DiskPressure"] Oct 2 19:09:50.298788 kubelet[1417]: I1002 19:09:50.298681 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m6sj2" nodeCondition=["DiskPressure"] Oct 2 19:09:50.337385 kubelet[1417]: I1002 19:09:50.335527 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wbhdf" nodeCondition=["DiskPressure"] Oct 2 19:09:50.483141 systemd[1]: run-containerd-runc-k8s.io-b0c9632595cb277555697f00d839fdf7dfefe5054a42eab3acdbdc2152710e94-runc.VxeFE5.mount: Deactivated successfully. Oct 2 19:09:50.571869 kubelet[1417]: I1002 19:09:50.571806 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kw769" nodeCondition=["DiskPressure"] Oct 2 19:09:50.669132 kubelet[1417]: I1002 19:09:50.669049 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k5sjp" nodeCondition=["DiskPressure"] Oct 2 19:09:50.838695 kubelet[1417]: I1002 19:09:50.836953 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p7kc8" nodeCondition=["DiskPressure"] Oct 2 19:09:50.914452 kubelet[1417]: I1002 19:09:50.914375 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-j62tf" nodeCondition=["DiskPressure"] Oct 2 19:09:50.972907 kubelet[1417]: E1002 19:09:50.972845 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:51.019836 kubelet[1417]: I1002 19:09:51.019759 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-78vxt" nodeCondition=["DiskPressure"] Oct 2 19:09:51.211269 kubelet[1417]: I1002 19:09:51.210906 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-brkcz" nodeCondition=["DiskPressure"] Oct 2 19:09:51.317201 kubelet[1417]: I1002 19:09:51.317132 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nlzq6" nodeCondition=["DiskPressure"] Oct 2 19:09:51.408968 kubelet[1417]: I1002 19:09:51.408908 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q9qvh" nodeCondition=["DiskPressure"] Oct 2 19:09:51.514783 kubelet[1417]: I1002 19:09:51.514415 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fn6xj" nodeCondition=["DiskPressure"] Oct 2 19:09:51.789992 kubelet[1417]: I1002 19:09:51.789598 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t44qx" nodeCondition=["DiskPressure"] Oct 2 19:09:51.821347 kubelet[1417]: I1002 19:09:51.821274 1417 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-68xtn" nodeCondition=["DiskPressure"] Oct 2 19:09:51.867474 kubelet[1417]: I1002 19:09:51.867063 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g276k" nodeCondition=["DiskPressure"] Oct 2 19:09:51.973704 kubelet[1417]: E1002 19:09:51.973638 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:51.986382 kubelet[1417]: I1002 19:09:51.986326 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2mnlc" nodeCondition=["DiskPressure"] Oct 2 19:09:52.072776 kubelet[1417]: I1002 19:09:52.072060 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h2kgj" nodeCondition=["DiskPressure"] Oct 2 19:09:52.175494 kubelet[1417]: I1002 19:09:52.175400 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6vvjt" nodeCondition=["DiskPressure"] Oct 2 19:09:52.299750 kubelet[1417]: I1002 19:09:52.288724 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fh6sr" nodeCondition=["DiskPressure"] Oct 2 19:09:52.355279 kubelet[1417]: I1002 19:09:52.354466 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qt88n" nodeCondition=["DiskPressure"] Oct 2 19:09:52.484534 kubelet[1417]: I1002 19:09:52.484423 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-szqxj" nodeCondition=["DiskPressure"] Oct 2 19:09:52.663035 kubelet[1417]: I1002 19:09:52.651862 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lj87p" nodeCondition=["DiskPressure"] Oct 2 19:09:52.777560 kubelet[1417]: I1002 19:09:52.777150 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rkjzh" nodeCondition=["DiskPressure"] Oct 2 19:09:52.883179 kubelet[1417]: I1002 19:09:52.882338 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vwdpb" nodeCondition=["DiskPressure"] Oct 2 19:09:52.967254 kubelet[1417]: I1002 19:09:52.967066 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rx2hk" nodeCondition=["DiskPressure"] Oct 2 19:09:52.980441 kubelet[1417]: E1002 19:09:52.973916 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:53.013974 kubelet[1417]: I1002 19:09:53.013906 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zv8xd" nodeCondition=["DiskPressure"] Oct 2 19:09:53.121188 kubelet[1417]: I1002 19:09:53.121108 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pmln2" nodeCondition=["DiskPressure"] Oct 2 19:09:53.332464 kubelet[1417]: I1002 19:09:53.332398 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jbfpq" nodeCondition=["DiskPressure"] Oct 2 19:09:53.412670 kubelet[1417]: I1002 19:09:53.412610 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-snwnq" nodeCondition=["DiskPressure"] Oct 2 19:09:53.464513 
kubelet[1417]: I1002 19:09:53.464432 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b69kk" nodeCondition=["DiskPressure"] Oct 2 19:09:53.774502 kubelet[1417]: I1002 19:09:53.774310 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8c5j9" nodeCondition=["DiskPressure"] Oct 2 19:09:53.810707 kubelet[1417]: I1002 19:09:53.810639 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l6k7x" nodeCondition=["DiskPressure"] Oct 2 19:09:53.861127 kubelet[1417]: I1002 19:09:53.861018 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7569p" nodeCondition=["DiskPressure"] Oct 2 19:09:53.980762 kubelet[1417]: E1002 19:09:53.980683 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:54.116252 kubelet[1417]: I1002 19:09:54.116151 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-svb4r" nodeCondition=["DiskPressure"] Oct 2 19:09:54.146137 kubelet[1417]: I1002 19:09:54.146018 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jjpkg" nodeCondition=["DiskPressure"] Oct 2 19:09:54.211147 kubelet[1417]: I1002 19:09:54.211087 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6wrsl" nodeCondition=["DiskPressure"] Oct 2 19:09:54.315087 kubelet[1417]: I1002 19:09:54.315026 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4zf5s" nodeCondition=["DiskPressure"] Oct 2 19:09:54.411442 kubelet[1417]: I1002 19:09:54.411086 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m9kww" nodeCondition=["DiskPressure"] Oct 2 19:09:54.513750 kubelet[1417]: I1002 19:09:54.513655 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qjd9n" nodeCondition=["DiskPressure"] Oct 2 19:09:54.854317 kubelet[1417]: I1002 19:09:54.854253 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vxtgt" nodeCondition=["DiskPressure"] Oct 2 19:09:54.890322 kubelet[1417]: I1002 19:09:54.890257 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mcldq" nodeCondition=["DiskPressure"] Oct 2 19:09:54.914024 kubelet[1417]: I1002 19:09:54.913958 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9gsbp" nodeCondition=["DiskPressure"] Oct 2 19:09:54.981922 kubelet[1417]: E1002 19:09:54.981697 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:55.016598 kubelet[1417]: I1002 19:09:55.016531 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8wj5z" nodeCondition=["DiskPressure"] Oct 2 19:09:55.112283 kubelet[1417]: I1002 19:09:55.111961 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ld592" nodeCondition=["DiskPressure"] Oct 2 19:09:55.579762 kubelet[1417]: I1002 19:09:55.579706 1417 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-bz7x2" nodeCondition=["DiskPressure"] Oct 2 19:09:55.856276 kubelet[1417]: I1002 19:09:55.856171 1417 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:55.856276 kubelet[1417]: I1002 19:09:55.856207 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:09:55.857579 kubelet[1417]: I1002 19:09:55.857545 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:09:55.866211 kubelet[1417]: I1002 19:09:55.866191 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:09:55.866296 kubelet[1417]: I1002 19:09:55.866277 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:09:55.866334 kubelet[1417]: E1002 19:09:55.866324 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:09:55.866359 kubelet[1417]: E1002 19:09:55.866343 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:09:55.866359 kubelet[1417]: E1002 19:09:55.866353 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:09:55.866410 kubelet[1417]: E1002 19:09:55.866361 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:09:55.866410 kubelet[1417]: E1002 19:09:55.866372 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:09:55.866410 kubelet[1417]: E1002 19:09:55.866382 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:09:55.866410 kubelet[1417]: I1002 19:09:55.866392 1417 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:09:55.928182 kubelet[1417]: I1002 19:09:55.928119 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8l5zd" nodeCondition=["DiskPressure"] Oct 2 19:09:55.982899 kubelet[1417]: E1002 19:09:55.982833 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:56.427248 kubelet[1417]: I1002 19:09:56.427186 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-549k7" nodeCondition=["DiskPressure"] Oct 2 19:09:56.492010 kubelet[1417]: I1002 19:09:56.491937 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v4sgl" nodeCondition=["DiskPressure"] Oct 2 19:09:56.524095 kubelet[1417]: I1002 19:09:56.524029 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rmmkl" nodeCondition=["DiskPressure"] Oct 2 19:09:56.561851 kubelet[1417]: I1002 19:09:56.561757 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lprls" nodeCondition=["DiskPressure"] Oct 2 19:09:56.590891 kubelet[1417]: I1002 19:09:56.590829 1417 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xhbfb" nodeCondition=["DiskPressure"] Oct 2 19:09:56.629859 kubelet[1417]: I1002 19:09:56.629788 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7bhrg" nodeCondition=["DiskPressure"] Oct 2 19:09:56.897155 kubelet[1417]: I1002 19:09:56.897078 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jgmg6" nodeCondition=["DiskPressure"] Oct 2 19:09:56.983801 kubelet[1417]: E1002 19:09:56.983720 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:57.248846 kubelet[1417]: I1002 19:09:57.248660 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v9mkl" nodeCondition=["DiskPressure"] Oct 2 19:09:57.411680 kubelet[1417]: I1002 19:09:57.382260 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jp9vp" nodeCondition=["DiskPressure"] Oct 2 19:09:57.481519 kubelet[1417]: I1002 19:09:57.481437 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nl6wr" nodeCondition=["DiskPressure"] Oct 2 19:09:57.592718 kubelet[1417]: I1002 19:09:57.592102 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pl85w" nodeCondition=["DiskPressure"] Oct 2 19:09:57.667347 kubelet[1417]: I1002 19:09:57.667242 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xl74x" nodeCondition=["DiskPressure"] Oct 2 19:09:57.740046 kubelet[1417]: I1002 19:09:57.739975 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m89hs" nodeCondition=["DiskPressure"] Oct 2 19:09:57.803256 kubelet[1417]: I1002 19:09:57.802886 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8m5wv" nodeCondition=["DiskPressure"] Oct 2 19:09:57.892889 kubelet[1417]: I1002 19:09:57.892633 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w2qgl" nodeCondition=["DiskPressure"] Oct 2 19:09:57.945878 kubelet[1417]: I1002 19:09:57.945802 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hzdmz" nodeCondition=["DiskPressure"] Oct 2 19:09:57.984289 kubelet[1417]: E1002 19:09:57.984241 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:58.070848 kubelet[1417]: I1002 19:09:58.070793 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qwdqr" nodeCondition=["DiskPressure"] Oct 2 19:09:58.199273 kubelet[1417]: I1002 19:09:58.189060 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8swm7" nodeCondition=["DiskPressure"] Oct 2 19:09:58.299706 kubelet[1417]: I1002 19:09:58.299533 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tx2dq" nodeCondition=["DiskPressure"] Oct 2 19:09:58.425404 kubelet[1417]: I1002 19:09:58.425330 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lhmxs" 
nodeCondition=["DiskPressure"] Oct 2 19:09:58.521275 kubelet[1417]: I1002 19:09:58.520946 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tbthc" nodeCondition=["DiskPressure"] Oct 2 19:09:58.721973 kubelet[1417]: I1002 19:09:58.721909 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tfqtf" nodeCondition=["DiskPressure"] Oct 2 19:09:58.831914 kubelet[1417]: I1002 19:09:58.831865 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kw5v5" nodeCondition=["DiskPressure"] Oct 2 19:09:58.858035 kubelet[1417]: E1002 19:09:58.857975 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:58.913159 kubelet[1417]: I1002 19:09:58.913093 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kp8zh" nodeCondition=["DiskPressure"] Oct 2 19:09:58.986704 kubelet[1417]: E1002 19:09:58.985138 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:09:59.102914 kubelet[1417]: E1002 19:09:59.102807 1417 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-2ckzv" podUID="20101097-40e7-4d0a-a992-23f4379dc0f4" Oct 2 19:09:59.974874 kubelet[1417]: I1002 19:09:59.974786 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m7f2s" nodeCondition=["DiskPressure"] Oct 2 19:09:59.987076 kubelet[1417]: E1002 19:09:59.987017 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:00.267846 kubelet[1417]: I1002 19:10:00.267580 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6bll4" nodeCondition=["DiskPressure"] Oct 2 19:10:00.472399 kubelet[1417]: I1002 19:10:00.472332 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cmb76" nodeCondition=["DiskPressure"] Oct 2 19:10:00.550027 kubelet[1417]: I1002 19:10:00.549811 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zpwl6" nodeCondition=["DiskPressure"] Oct 2 19:10:00.601912 kubelet[1417]: I1002 19:10:00.601838 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5b4gg" nodeCondition=["DiskPressure"] Oct 2 19:10:00.655717 kubelet[1417]: I1002 19:10:00.653013 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gcz79" nodeCondition=["DiskPressure"] Oct 2 19:10:00.778678 kubelet[1417]: I1002 19:10:00.778598 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tz6jb" nodeCondition=["DiskPressure"] Oct 2 19:10:00.988188 kubelet[1417]: E1002 19:10:00.988130 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:01.088427 
kubelet[1417]: I1002 19:10:01.088361 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sqxs9" nodeCondition=["DiskPressure"] Oct 2 19:10:01.121352 kubelet[1417]: I1002 19:10:01.121194 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5nfrk" nodeCondition=["DiskPressure"] Oct 2 19:10:01.159635 kubelet[1417]: I1002 19:10:01.159561 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2f9r9" nodeCondition=["DiskPressure"] Oct 2 19:10:01.205008 kubelet[1417]: I1002 19:10:01.204944 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gsvh6" nodeCondition=["DiskPressure"] Oct 2 19:10:01.235993 kubelet[1417]: I1002 19:10:01.235828 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pcd8n" nodeCondition=["DiskPressure"] Oct 2 19:10:01.266461 kubelet[1417]: I1002 19:10:01.266292 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kqvqw" nodeCondition=["DiskPressure"] Oct 2 19:10:01.297827 kubelet[1417]: I1002 19:10:01.297755 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x8m7j" nodeCondition=["DiskPressure"] Oct 2 19:10:01.338907 kubelet[1417]: I1002 19:10:01.338790 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hzwrn" nodeCondition=["DiskPressure"] Oct 2 19:10:01.464906 kubelet[1417]: I1002 19:10:01.464836 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6kslj" nodeCondition=["DiskPressure"] Oct 2 19:10:01.566620 kubelet[1417]: I1002 19:10:01.566269 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jkrhb" nodeCondition=["DiskPressure"] Oct 2 19:10:01.680659 kubelet[1417]: I1002 19:10:01.680232 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-httxk" nodeCondition=["DiskPressure"] Oct 2 19:10:01.782726 kubelet[1417]: I1002 19:10:01.771678 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-spwt7" nodeCondition=["DiskPressure"] Oct 2 19:10:01.877498 kubelet[1417]: I1002 19:10:01.877424 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ld4qk" nodeCondition=["DiskPressure"] Oct 2 19:10:01.989484 kubelet[1417]: E1002 19:10:01.989316 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:02.021846 kubelet[1417]: I1002 19:10:02.021777 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2dwxj" nodeCondition=["DiskPressure"] Oct 2 19:10:02.168317 kubelet[1417]: I1002 19:10:02.167902 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cqvmj" nodeCondition=["DiskPressure"] Oct 2 19:10:02.227827 kubelet[1417]: I1002 19:10:02.226930 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ft9vj" nodeCondition=["DiskPressure"] Oct 2 19:10:02.319068 kubelet[1417]: I1002 19:10:02.318980 1417 eviction_manager.go:170] "Failed to admit pod to 
node" pod="tigera-operator/tigera-operator-8547bd6cc6-vp2xm" nodeCondition=["DiskPressure"] Oct 2 19:10:02.412309 kubelet[1417]: I1002 19:10:02.412234 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gftnr" nodeCondition=["DiskPressure"] Oct 2 19:10:02.523108 kubelet[1417]: I1002 19:10:02.522422 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fts5d" nodeCondition=["DiskPressure"] Oct 2 19:10:02.622674 kubelet[1417]: I1002 19:10:02.618107 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ctdm4" nodeCondition=["DiskPressure"] Oct 2 19:10:02.669042 kubelet[1417]: I1002 19:10:02.668971 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dfvwt" nodeCondition=["DiskPressure"] Oct 2 19:10:02.770127 kubelet[1417]: I1002 19:10:02.770034 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-64gm6" nodeCondition=["DiskPressure"] Oct 2 19:10:02.981682 kubelet[1417]: I1002 19:10:02.981596 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vpztm" nodeCondition=["DiskPressure"] Oct 2 19:10:02.989962 kubelet[1417]: E1002 19:10:02.989891 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:03.080791 kubelet[1417]: I1002 19:10:03.080714 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kvhdq" nodeCondition=["DiskPressure"] Oct 2 19:10:03.255045 kubelet[1417]: I1002 19:10:03.254478 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6lcgd" nodeCondition=["DiskPressure"] Oct 2 19:10:03.348693 kubelet[1417]: I1002 19:10:03.347669 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-24xd9" nodeCondition=["DiskPressure"] Oct 2 19:10:03.458110 kubelet[1417]: I1002 19:10:03.458040 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lfkqz" nodeCondition=["DiskPressure"] Oct 2 19:10:03.569283 kubelet[1417]: I1002 19:10:03.569121 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mlxm8" nodeCondition=["DiskPressure"] Oct 2 19:10:03.660321 kubelet[1417]: I1002 19:10:03.660257 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qjcgk" nodeCondition=["DiskPressure"] Oct 2 19:10:03.780758 kubelet[1417]: I1002 19:10:03.780688 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pgsvn" nodeCondition=["DiskPressure"] Oct 2 19:10:03.942322 kubelet[1417]: I1002 19:10:03.942253 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vthdh" nodeCondition=["DiskPressure"] Oct 2 19:10:03.990889 kubelet[1417]: E1002 19:10:03.990848 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:04.101954 kubelet[1417]: I1002 19:10:04.100412 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xm6qd" nodeCondition=["DiskPressure"] Oct 2 19:10:04.101954 
kubelet[1417]: E1002 19:10:04.101286 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:10:04.219517 kubelet[1417]: I1002 19:10:04.219362 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bzqdd" nodeCondition=["DiskPressure"] Oct 2 19:10:04.279713 kubelet[1417]: I1002 19:10:04.277885 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bm25r" nodeCondition=["DiskPressure"] Oct 2 19:10:04.381140 kubelet[1417]: I1002 19:10:04.380474 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z6th7" nodeCondition=["DiskPressure"] Oct 2 19:10:04.728706 kubelet[1417]: I1002 19:10:04.728072 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-79mxw" nodeCondition=["DiskPressure"] Oct 2 19:10:04.774829 kubelet[1417]: I1002 19:10:04.774724 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n5j65" nodeCondition=["DiskPressure"] Oct 2 19:10:04.831769 kubelet[1417]: I1002 19:10:04.831702 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-knh9k" nodeCondition=["DiskPressure"] Oct 2 19:10:04.920224 kubelet[1417]: I1002 19:10:04.920129 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9m2ln" nodeCondition=["DiskPressure"] Oct 2 19:10:04.991948 kubelet[1417]: E1002 19:10:04.991721 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:05.021248 kubelet[1417]: I1002 19:10:05.020529 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8prmx" nodeCondition=["DiskPressure"] Oct 2 19:10:05.125830 kubelet[1417]: I1002 19:10:05.125751 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mwdh9" nodeCondition=["DiskPressure"] Oct 2 19:10:05.187385 kubelet[1417]: I1002 19:10:05.187295 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vbsvc" nodeCondition=["DiskPressure"] Oct 2 19:10:05.281593 kubelet[1417]: I1002 19:10:05.280885 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xp9lp" nodeCondition=["DiskPressure"] Oct 2 19:10:05.469897 kubelet[1417]: I1002 19:10:05.462503 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-46d4f" nodeCondition=["DiskPressure"] Oct 2 19:10:05.569848 kubelet[1417]: I1002 19:10:05.569631 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t8wt9" nodeCondition=["DiskPressure"] Oct 2 19:10:05.768562 kubelet[1417]: I1002 19:10:05.768485 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5g7r8" nodeCondition=["DiskPressure"] Oct 2 19:10:05.816978 kubelet[1417]: I1002 19:10:05.816825 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kmg2m" nodeCondition=["DiskPressure"] Oct 2 19:10:05.883262 kubelet[1417]: I1002 19:10:05.883201 1417 
eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:10:05.883262 kubelet[1417]: I1002 19:10:05.883257 1417 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:10:05.885577 kubelet[1417]: I1002 19:10:05.885544 1417 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:10:05.899110 kubelet[1417]: I1002 19:10:05.899063 1417 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:10:05.899294 kubelet[1417]: I1002 19:10:05.899182 1417 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-2ckzv","kube-system/coredns-5dd5756b68-9jw66","kube-system/coredns-5dd5756b68-8glxb","calico-system/calico-kube-controllers-74b9887bb6-bt4ql","calico-system/calico-node-gv4q6","kube-system/kube-proxy-n7wzf"] Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899222 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-2ckzv" Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899242 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-9jw66" Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899255 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8glxb" Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899266 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-bt4ql" Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899277 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-gv4q6" Oct 2 19:10:05.899294 kubelet[1417]: E1002 19:10:05.899288 1417 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n7wzf" Oct 2 19:10:05.899294 kubelet[1417]: I1002 19:10:05.899301 1417 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:10:05.916920 kubelet[1417]: I1002 19:10:05.916363 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wjcx7" nodeCondition=["DiskPressure"] Oct 2 19:10:05.992465 kubelet[1417]: E1002 19:10:05.992406 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:06.016308 kubelet[1417]: I1002 19:10:06.016231 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ks2fr" nodeCondition=["DiskPressure"] Oct 2 19:10:06.212227 kubelet[1417]: I1002 19:10:06.211566 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wbr7q" nodeCondition=["DiskPressure"] Oct 2 19:10:06.312283 kubelet[1417]: I1002 19:10:06.312217 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-76n4r" nodeCondition=["DiskPressure"] Oct 2 19:10:06.411838 kubelet[1417]: I1002 19:10:06.411750 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vrwf5" nodeCondition=["DiskPressure"] Oct 2 19:10:06.620317 kubelet[1417]: I1002 19:10:06.620243 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hlsrq" nodeCondition=["DiskPressure"] Oct 2 
19:10:06.712058 kubelet[1417]: I1002 19:10:06.711982 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g8pvp" nodeCondition=["DiskPressure"] Oct 2 19:10:06.812283 kubelet[1417]: I1002 19:10:06.812230 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xzl4d" nodeCondition=["DiskPressure"] Oct 2 19:10:06.993686 kubelet[1417]: E1002 19:10:06.993487 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:10:07.201777 kubelet[1417]: I1002 19:10:07.201712 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r8ktv" nodeCondition=["DiskPressure"] Oct 2 19:10:07.233677 kubelet[1417]: I1002 19:10:07.233593 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8fzlj" nodeCondition=["DiskPressure"] Oct 2 19:10:07.325794 kubelet[1417]: I1002 19:10:07.325752 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wpxf5" nodeCondition=["DiskPressure"] Oct 2 19:10:07.414206 kubelet[1417]: I1002 19:10:07.414143 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t5vmq" nodeCondition=["DiskPressure"] Oct 2 19:10:07.512994 kubelet[1417]: I1002 19:10:07.512945 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kkwfl" nodeCondition=["DiskPressure"] Oct 2 19:10:07.712873 kubelet[1417]: I1002 19:10:07.712197 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v7srm" nodeCondition=["DiskPressure"] Oct 2 19:10:07.813877 kubelet[1417]: I1002 19:10:07.813762 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xpbdp" nodeCondition=["DiskPressure"] Oct 2 19:10:07.927174 kubelet[1417]: I1002 19:10:07.927086 1417 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jf9bh" nodeCondition=["DiskPressure"] Oct 2 19:10:07.994526 kubelet[1417]: E1002 19:10:07.994381 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"