Oct 2 19:25:53.967338 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:25:53.967365 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:25:53.967376 kernel: BIOS-provided physical RAM map: Oct 2 19:25:53.967384 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:25:53.967392 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:25:53.967399 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:25:53.967409 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:25:53.967417 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:25:53.967426 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:25:53.967434 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:25:53.967442 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:25:53.967449 kernel: NX (Execute Disable) protection: active Oct 2 19:25:53.967457 kernel: SMBIOS 2.8 present. Oct 2 19:25:53.967467 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:25:53.967480 kernel: Hypervisor detected: KVM Oct 2 19:25:53.967498 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:25:53.967506 kernel: kvm-clock: cpu 0, msr 68f8a001, primary cpu clock Oct 2 19:25:53.967514 kernel: kvm-clock: using sched offset of 2747409845 cycles Oct 2 19:25:53.967522 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:25:53.967531 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:25:53.967539 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:25:53.967548 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:25:53.967554 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:25:53.967563 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:25:53.967569 kernel: Using GB pages for direct mapping Oct 2 19:25:53.967576 kernel: ACPI: Early table checksum verification disabled Oct 2 19:25:53.967582 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:25:53.967698 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967704 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967710 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967716 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:25:53.967722 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967730 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967736 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:25:53.967742 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:25:53.967748 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:25:53.967754 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:25:53.967760 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:25:53.967766 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:25:53.967772 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:25:53.967782 kernel: No NUMA configuration found Oct 2 19:25:53.967789 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:25:53.967795 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:25:53.967802 kernel: Zone ranges: Oct 2 19:25:53.967808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:25:53.967815 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:25:53.967822 kernel: Normal empty Oct 2 19:25:53.967829 kernel: Movable zone start for each node Oct 2 19:25:53.967835 kernel: Early memory node ranges Oct 2 19:25:53.967842 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:25:53.967848 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:25:53.967855 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:25:53.967861 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:25:53.967868 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:25:53.967874 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:25:53.967882 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:25:53.967889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:25:53.967895 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:25:53.967902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:25:53.967909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:25:53.967915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:25:53.967957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:25:53.967964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:25:53.967970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:25:53.967978 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:25:53.967985 kernel: TSC deadline timer available Oct 2 19:25:53.967991 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:25:53.967998 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:25:53.968004 kernel: kvm-guest: setup PV sched yield Oct 2 19:25:53.968011 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:25:53.968017 kernel: Booting paravirtualized kernel on KVM Oct 2 19:25:53.968024 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:25:53.968031 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:25:53.968039 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:25:53.968045 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:25:53.968051 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:25:53.968058 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:25:53.968064 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:25:53.968071 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:25:53.968077 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:25:53.968084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:25:53.968090 kernel: Policy zone: DMA32 Oct 2 19:25:53.968099 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:25:53.968106 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:25:53.968113 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:25:53.968119 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:25:53.968126 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:25:53.968133 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:25:53.968140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:25:53.968146 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:25:53.968154 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:25:53.968160 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:25:53.968167 kernel: rcu: RCU event tracing is enabled. Oct 2 19:25:53.968174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:25:53.968181 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:25:53.968187 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:25:53.968194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:25:53.968200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:25:53.968207 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:25:53.968215 kernel: random: crng init done Oct 2 19:25:53.968221 kernel: Console: colour VGA+ 80x25 Oct 2 19:25:53.968228 kernel: printk: console [ttyS0] enabled Oct 2 19:25:53.968234 kernel: ACPI: Core revision 20210730 Oct 2 19:25:53.968241 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:25:53.968248 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:25:53.968254 kernel: x2apic enabled Oct 2 19:25:53.968261 kernel: Switched APIC routing to physical x2apic. Oct 2 19:25:53.968267 kernel: kvm-guest: setup PV IPIs Oct 2 19:25:53.968273 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:25:53.968281 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:25:53.968288 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:25:53.968295 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:25:53.968301 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:25:53.968307 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:25:53.968314 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:25:53.968321 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:25:53.968327 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:25:53.968335 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:25:53.968347 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:25:53.968354 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:25:53.968362 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:25:53.968369 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:25:53.968376 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:25:53.968383 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:25:53.968390 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:25:53.968397 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:25:53.968404 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:25:53.968412 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:25:53.968419 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:25:53.968426 kernel: LSM: Security Framework initializing Oct 2 19:25:53.968433 kernel: SELinux: Initializing. Oct 2 19:25:53.968439 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:25:53.968446 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:25:53.968453 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:25:53.968462 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:25:53.968468 kernel: ... version: 0 Oct 2 19:25:53.968475 kernel: ... bit width: 48 Oct 2 19:25:53.968482 kernel: ... generic registers: 6 Oct 2 19:25:53.968494 kernel: ... value mask: 0000ffffffffffff Oct 2 19:25:53.968501 kernel: ... max period: 00007fffffffffff Oct 2 19:25:53.968508 kernel: ... fixed-purpose events: 0 Oct 2 19:25:53.968515 kernel: ... event mask: 000000000000003f Oct 2 19:25:53.968522 kernel: signal: max sigframe size: 1776 Oct 2 19:25:53.968530 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:25:53.968537 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:25:53.968544 kernel: x86: Booting SMP configuration: Oct 2 19:25:53.968550 kernel: .... 
node #0, CPUs: #1 Oct 2 19:25:53.968557 kernel: kvm-clock: cpu 1, msr 68f8a041, secondary cpu clock Oct 2 19:25:53.968564 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:25:53.968571 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:25:53.968577 kernel: #2 Oct 2 19:25:53.968585 kernel: kvm-clock: cpu 2, msr 68f8a081, secondary cpu clock Oct 2 19:25:53.968593 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:25:53.968600 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:25:53.968606 kernel: #3 Oct 2 19:25:53.968613 kernel: kvm-clock: cpu 3, msr 68f8a0c1, secondary cpu clock Oct 2 19:25:53.968620 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:25:53.968627 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:25:53.968633 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:25:53.968640 kernel: smpboot: Max logical packages: 1 Oct 2 19:25:53.968647 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:25:53.968654 kernel: devtmpfs: initialized Oct 2 19:25:53.968662 kernel: x86/mm: Memory block size: 128MB Oct 2 19:25:53.968669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:25:53.968676 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:25:53.968683 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:25:53.968690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:25:53.968697 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:25:53.968704 kernel: audit: type=2000 audit(1696274753.597:1): state=initialized audit_enabled=0 res=1 Oct 2 19:25:53.968711 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:25:53.968717 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:25:53.968726 kernel: cpuidle: using governor menu Oct 2 19:25:53.968732 kernel: ACPI: bus type PCI registered Oct 2 19:25:53.968739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:25:53.968746 kernel: dca service started, version 1.12.1 Oct 2 19:25:53.968753 kernel: PCI: Using configuration type 1 for base access Oct 2 19:25:53.968760 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:25:53.968766 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:25:53.968773 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:25:53.968780 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:25:53.968788 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:25:53.968795 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:25:53.968802 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:25:53.968809 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:25:53.968815 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:25:53.968822 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:25:53.968829 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:25:53.968836 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:25:53.968843 kernel: ACPI: Interpreter enabled Oct 2 19:25:53.968851 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:25:53.968857 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:25:53.968864 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:25:53.968871 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:25:53.968878 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:25:53.969027 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:25:53.969040 kernel: acpiphp: Slot [3] registered Oct 2 19:25:53.969047 kernel: acpiphp: Slot [4] registered Oct 2 19:25:53.969056 kernel: acpiphp: Slot [5] registered Oct 2 19:25:53.969063 kernel: acpiphp: Slot [6] registered Oct 2 19:25:53.969069 kernel: acpiphp: Slot [7] registered Oct 2 19:25:53.969076 kernel: acpiphp: Slot [8] registered Oct 2 19:25:53.969083 kernel: acpiphp: Slot [9] registered Oct 2 19:25:53.969090 kernel: acpiphp: Slot [10] registered Oct 2 19:25:53.969096 kernel: acpiphp: Slot [11] registered Oct 2 19:25:53.969103 kernel: acpiphp: Slot [12] registered Oct 2 19:25:53.969110 kernel: acpiphp: Slot [13] registered Oct 2 19:25:53.969118 kernel: acpiphp: Slot [14] registered Oct 2 19:25:53.969125 kernel: acpiphp: Slot [15] registered Oct 2 19:25:53.969131 kernel: acpiphp: Slot [16] registered Oct 2 19:25:53.969138 kernel: acpiphp: Slot [17] registered Oct 2 19:25:53.969145 kernel: acpiphp: Slot [18] registered Oct 2 19:25:53.969152 kernel: acpiphp: Slot [19] registered Oct 2 19:25:53.969158 kernel: acpiphp: Slot [20] registered Oct 2 19:25:53.969165 kernel: acpiphp: Slot [21] registered Oct 2 19:25:53.969172 kernel: acpiphp: Slot [22] registered Oct 2 19:25:53.969178 kernel: acpiphp: Slot [23] registered Oct 2 19:25:53.969186 kernel: acpiphp: Slot [24] registered Oct 2 19:25:53.969193 kernel: acpiphp: Slot [25] registered Oct 2 19:25:53.969200 kernel: acpiphp: Slot [26] registered Oct 2 19:25:53.969207 kernel: acpiphp: Slot [27] registered Oct 2 19:25:53.969213 kernel: acpiphp: Slot [28] registered Oct 2 19:25:53.969220 kernel: acpiphp: Slot [29] registered Oct 2 19:25:53.969227 kernel: acpiphp: Slot [30] registered Oct 2 19:25:53.969233 kernel: acpiphp: Slot [31] registered Oct 2 19:25:53.969240 kernel: PCI host bridge to bus 0000:00 Oct 2 19:25:53.969346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:25:53.969455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:25:53.969532 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:25:53.969598 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:25:53.969677 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:25:53.969792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:25:53.971049 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:25:53.971170 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:25:53.971264 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:25:53.971422 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:25:53.971518 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:25:53.971603 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:25:53.971683 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:25:53.971761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:25:53.971868 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:25:53.971993 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:25:53.972080 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:25:53.972898 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:25:53.973055 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:25:53.973184 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:25:53.973528 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:25:53.973620 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:25:53.973736 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:25:53.973819 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:25:53.973906 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:25:53.974006 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:25:53.974104 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:25:53.974191 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:25:53.974272 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:25:53.974353 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:25:53.974451 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:25:53.974540 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:25:53.974614 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:25:53.974691 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:25:53.974778 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:25:53.974788 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:25:53.974796 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:25:53.974803 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:25:53.974810 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:25:53.974817 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:25:53.974824 kernel: iommu: Default domain type: Translated Oct 2 19:25:53.974831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:25:53.974909 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:25:53.974997 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:25:53.975070 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:25:53.975079 kernel: 
vgaarb: loaded Oct 2 19:25:53.975086 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:25:53.975093 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:25:53.975102 kernel: PTP clock support registered Oct 2 19:25:53.975112 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:25:53.975121 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:25:53.975134 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:25:53.975143 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:25:53.975152 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:25:53.975161 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:25:53.975171 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:25:53.975180 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:25:53.975187 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:25:53.975194 kernel: pnp: PnP ACPI init Oct 2 19:25:53.975291 kernel: pnp 00:02: [dma 2] Oct 2 19:25:53.975305 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:25:53.975312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:25:53.975319 kernel: NET: Registered PF_INET protocol family Oct 2 19:25:53.975326 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:25:53.975335 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:25:53.975344 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:25:53.975353 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:25:53.975361 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:25:53.975372 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:25:53.975380 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:25:53.975388 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:25:53.975396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:25:53.975404 kernel: NET: Registered PF_XDP protocol family Oct 2 19:25:53.975493 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:25:53.975570 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:25:53.975641 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:25:53.975827 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:25:53.975901 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:25:53.975998 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:25:53.976084 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:25:53.976193 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:25:53.976206 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:25:53.976215 kernel: Initialise system trusted keyrings Oct 2 19:25:53.976224 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:25:53.976238 kernel: Key type asymmetric registered Oct 2 19:25:53.976248 kernel: Asymmetric key parser 'x509' registered Oct 2 19:25:53.976258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:25:53.976265 kernel: io scheduler mq-deadline registered Oct 2 19:25:53.976272 kernel: io scheduler kyber registered Oct 2 19:25:53.976279 kernel: io scheduler bfq registered Oct 2 19:25:53.976286 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:25:53.976294 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:25:53.976301 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:25:53.976309 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:25:53.976318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:25:53.976325 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:25:53.976332 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:25:53.976339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:25:53.976347 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:25:53.976443 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:25:53.976454 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:25:53.976531 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:25:53.976603 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:25:53 UTC (1696274753) Oct 2 19:25:53.976676 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:25:53.976686 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:25:53.976693 kernel: Segment Routing with IPv6 Oct 2 19:25:53.976700 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:25:53.976707 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:25:53.976715 kernel: Key type dns_resolver registered Oct 2 19:25:53.976724 kernel: IPI shorthand broadcast: enabled Oct 2 19:25:53.976733 kernel: sched_clock: Marking stable (403033041, 70827558)->(522698626, -48838027) Oct 2 19:25:53.976745 kernel: registered taskstats version 1 Oct 2 19:25:53.976754 kernel: Loading compiled-in X.509 certificates Oct 2 19:25:53.976763 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:25:53.976772 kernel: Key type .fscrypt registered Oct 2 19:25:53.976781 kernel: Key type fscrypt-provisioning registered Oct 2 19:25:53.976791 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:25:53.976801 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:25:53.976809 kernel: ima: No architecture policies found Oct 2 19:25:53.976819 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:25:53.976826 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:25:53.976833 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:25:53.976840 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:25:53.976847 kernel: Run /init as init process Oct 2 19:25:53.976854 kernel: with arguments: Oct 2 19:25:53.976861 kernel: /init Oct 2 19:25:53.976868 kernel: with environment: Oct 2 19:25:53.976884 kernel: HOME=/ Oct 2 19:25:53.976893 kernel: TERM=linux Oct 2 19:25:53.976901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:25:53.976910 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:25:53.976947 systemd[1]: Detected virtualization kvm. Oct 2 19:25:53.976956 systemd[1]: Detected architecture x86-64. Oct 2 19:25:53.976965 systemd[1]: Running in initrd. Oct 2 19:25:53.976976 systemd[1]: No hostname configured, using default hostname. 
Oct 2 19:25:53.976989 systemd[1]: Hostname set to . Oct 2 19:25:53.976997 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:25:53.977005 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:25:53.977012 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:25:53.977020 systemd[1]: Reached target cryptsetup.target. Oct 2 19:25:53.977027 systemd[1]: Reached target paths.target. Oct 2 19:25:53.977035 systemd[1]: Reached target slices.target. Oct 2 19:25:53.977042 systemd[1]: Reached target swap.target. Oct 2 19:25:53.977050 systemd[1]: Reached target timers.target. Oct 2 19:25:53.977061 systemd[1]: Listening on iscsid.socket. Oct 2 19:25:53.977071 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:25:53.977081 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:25:53.977092 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:25:53.977103 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:25:53.977112 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:25:53.977123 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:25:53.977136 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:25:53.977144 systemd[1]: Reached target sockets.target. Oct 2 19:25:53.977151 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:25:53.977159 systemd[1]: Finished network-cleanup.service. Oct 2 19:25:53.977167 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:25:53.977175 systemd[1]: Starting systemd-journald.service... Oct 2 19:25:53.977183 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:25:53.977192 systemd[1]: Starting systemd-resolved.service... Oct 2 19:25:53.977200 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:25:53.977208 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:25:53.977215 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:25:53.977224 kernel: audit: type=1130 audit(1696274753.969:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:53.977231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:25:53.977243 systemd-journald[197]: Journal started Oct 2 19:25:53.977294 systemd-journald[197]: Runtime Journal (/run/log/journal/a4a232194b384a8aac06d9b761f1ca2a) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:25:53.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:53.970299 systemd-modules-load[198]: Inserted module 'overlay' Oct 2 19:25:53.990225 systemd-resolved[199]: Positive Trust Anchors: Oct 2 19:25:54.011332 systemd[1]: Started systemd-journald.service. Oct 2 19:25:54.011354 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:25:53.990236 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:25:54.014313 kernel: audit: type=1130 audit(1696274754.011:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:25:54.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:53.990262 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:25:54.022812 kernel: audit: type=1130 audit(1696274754.014:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.022830 kernel: audit: type=1130 audit(1696274754.017:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:53.992452 systemd-resolved[199]: Defaulting to hostname 'linux'. Oct 2 19:25:54.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.012165 systemd[1]: Started systemd-resolved.service. Oct 2 19:25:54.026787 kernel: audit: type=1130 audit(1696274754.022:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.015511 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:25:54.017997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:25:54.023199 systemd[1]: Reached target nss-lookup.target. Oct 2 19:25:54.026959 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:25:54.031445 systemd-modules-load[198]: Inserted module 'br_netfilter' Oct 2 19:25:54.032034 kernel: Bridge firewalling registered Oct 2 19:25:54.039038 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:25:54.042392 kernel: audit: type=1130 audit(1696274754.038:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.040040 systemd[1]: Starting dracut-cmdline.service... 
Oct 2 19:25:54.048270 dracut-cmdline[214]: dracut-dracut-053 Oct 2 19:25:54.050673 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:25:54.053610 kernel: SCSI subsystem initialized Oct 2 19:25:54.064605 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:25:54.064677 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:25:54.064689 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:25:54.067263 systemd-modules-load[198]: Inserted module 'dm_multipath' Oct 2 19:25:54.068691 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:25:54.072381 kernel: audit: type=1130 audit(1696274754.068:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.069655 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:25:54.079429 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:25:54.082457 kernel: audit: type=1130 audit(1696274754.078:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.111947 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:25:54.122947 kernel: iscsi: registered transport (tcp) Oct 2 19:25:54.142943 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:25:54.143006 kernel: QLogic iSCSI HBA Driver Oct 2 19:25:54.170528 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:25:54.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.171772 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:25:54.174242 kernel: audit: type=1130 audit(1696274754.169:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:25:54.218958 kernel: raid6: avx2x4 gen() 29777 MB/s Oct 2 19:25:54.235948 kernel: raid6: avx2x4 xor() 7609 MB/s Oct 2 19:25:54.252957 kernel: raid6: avx2x2 gen() 32163 MB/s Oct 2 19:25:54.269943 kernel: raid6: avx2x2 xor() 19326 MB/s Oct 2 19:25:54.286946 kernel: raid6: avx2x1 gen() 26620 MB/s Oct 2 19:25:54.303944 kernel: raid6: avx2x1 xor() 15368 MB/s Oct 2 19:25:54.320952 kernel: raid6: sse2x4 gen() 13886 MB/s Oct 2 19:25:54.337948 kernel: raid6: sse2x4 xor() 6937 MB/s Oct 2 19:25:54.354949 kernel: raid6: sse2x2 gen() 15490 MB/s Oct 2 19:25:54.371945 kernel: raid6: sse2x2 xor() 9552 MB/s Oct 2 19:25:54.388954 kernel: raid6: sse2x1 gen() 12297 MB/s Oct 2 19:25:54.406334 kernel: raid6: sse2x1 xor() 7557 MB/s Oct 2 19:25:54.406382 kernel: raid6: using algorithm avx2x2 gen() 32163 MB/s Oct 2 19:25:54.406396 kernel: raid6: .... xor() 19326 MB/s, rmw enabled Oct 2 19:25:54.406409 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:25:54.417948 kernel: xor: automatically using best checksumming function avx Oct 2 19:25:54.506955 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:25:54.515724 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:25:54.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.516000 audit: BPF prog-id=7 op=LOAD Oct 2 19:25:54.516000 audit: BPF prog-id=8 op=LOAD Oct 2 19:25:54.517240 systemd[1]: Starting systemd-udevd.service... Oct 2 19:25:54.534525 systemd-udevd[399]: Using default interface naming scheme 'v252'. Oct 2 19:25:54.538712 systemd[1]: Started systemd-udevd.service. Oct 2 19:25:54.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.562523 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:25:54.573001 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Oct 2 19:25:54.597443 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:25:54.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.630176 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:25:54.675719 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:25:54.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:54.698950 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:25:54.700950 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:25:54.710185 kernel: libata version 3.00 loaded. Oct 2 19:25:54.716955 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 2 19:25:54.717020 kernel: AES CTR mode by8 optimization enabled Oct 2 19:25:54.718955 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:25:54.720962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:25:54.724994 kernel: scsi host0: ata_piix Oct 2 19:25:54.725202 kernel: scsi host1: ata_piix Oct 2 19:25:54.725307 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:25:54.725318 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:25:54.741937 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Oct 2 19:25:54.747790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:25:54.761492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:25:54.783125 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:25:54.787680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:25:54.793418 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:25:54.796864 systemd[1]: Starting disk-uuid.service... Oct 2 19:25:54.809969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:25:54.814955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:25:54.883113 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:25:54.884943 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:25:54.917008 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:25:54.917209 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:25:54.934958 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:25:55.820944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:25:55.821021 disk-uuid[525]: The operation has completed successfully. Oct 2 19:25:55.845057 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:25:55.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:55.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:55.845159 systemd[1]: Finished disk-uuid.service. Oct 2 19:25:55.854000 systemd[1]: Starting verity-setup.service... Oct 2 19:25:55.867959 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:25:55.898860 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:25:55.901583 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:25:55.903760 systemd[1]: Finished verity-setup.service. Oct 2 19:25:55.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:55.973961 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:25:55.974593 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:25:55.975061 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:25:55.975791 systemd[1]: Starting ignition-setup.service... Oct 2 19:25:55.976969 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:25:55.985343 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:25:55.985388 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:25:55.985403 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:25:55.994266 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:25:56.001883 systemd[1]: Finished ignition-setup.service. Oct 2 19:25:56.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.004202 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:25:56.076445 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:25:56.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.076000 audit: BPF prog-id=9 op=LOAD Oct 2 19:25:56.078446 systemd[1]: Starting systemd-networkd.service... Oct 2 19:25:56.099322 systemd-networkd[704]: lo: Link UP Oct 2 19:25:56.099336 systemd-networkd[704]: lo: Gained carrier Oct 2 19:25:56.099813 systemd-networkd[704]: Enumeration completed Oct 2 19:25:56.099945 systemd[1]: Started systemd-networkd.service. Oct 2 19:25:56.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.100087 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:25:56.101100 systemd[1]: Reached target network.target. Oct 2 19:25:56.101973 systemd-networkd[704]: eth0: Link UP Oct 2 19:25:56.101976 systemd-networkd[704]: eth0: Gained carrier Oct 2 19:25:56.103143 systemd[1]: Starting iscsiuio.service... Oct 2 19:25:56.133298 ignition[621]: Ignition 2.14.0 Oct 2 19:25:56.133316 ignition[621]: Stage: fetch-offline Oct 2 19:25:56.133381 ignition[621]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:25:56.133389 ignition[621]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:25:56.133514 ignition[621]: parsed url from cmdline: "" Oct 2 19:25:56.133517 ignition[621]: no config URL provided Oct 2 19:25:56.133522 ignition[621]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:25:56.133528 ignition[621]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:25:56.133557 ignition[621]: op(1): [started] loading QEMU firmware config module Oct 2 19:25:56.133562 ignition[621]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:25:56.166276 systemd[1]: Started iscsiuio.service. Oct 2 19:25:56.168797 systemd[1]: Starting iscsid.service... Oct 2 19:25:56.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.169435 systemd-networkd[704]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:25:56.172603 iscsid[710]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:25:56.172603 iscsid[710]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:25:56.172603 iscsid[710]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:25:56.172603 iscsid[710]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:25:56.172603 iscsid[710]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:25:56.172603 iscsid[710]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:25:56.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.174085 systemd[1]: Started iscsid.service. Oct 2 19:25:56.176651 ignition[621]: op(1): [finished] loading QEMU firmware config module Oct 2 19:25:56.179722 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:25:56.176675 ignition[621]: QEMU firmware config was not found. Ignoring... Oct 2 19:25:56.192368 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:25:56.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.221248 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:25:56.221327 ignition[621]: parsing config with SHA512: a7bbc62bd6c239637148382628ac949ed28dfd8dd9e062a29a5bb4c4ea610702dcfdc8152d95e2b1b5d21cb41203b1f21ce5ded76fda4f2bb9468af811965695 Oct 2 19:25:56.222584 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:25:56.223938 systemd[1]: Reached target remote-fs.target. Oct 2 19:25:56.225967 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:25:56.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.234236 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:25:56.246590 unknown[621]: fetched base config from "system" Oct 2 19:25:56.246602 unknown[621]: fetched user config from "qemu" Oct 2 19:25:56.247019 ignition[621]: fetch-offline: fetch-offline passed Oct 2 19:25:56.247318 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.19 Oct 2 19:25:56.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.247155 ignition[621]: Ignition finished successfully Oct 2 19:25:56.247328 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Oct 2 19:25:56.248248 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:25:56.249486 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:25:56.250355 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:25:56.259386 ignition[725]: Ignition 2.14.0 Oct 2 19:25:56.259396 ignition[725]: Stage: kargs Oct 2 19:25:56.259502 ignition[725]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:25:56.259511 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:25:56.260432 ignition[725]: kargs: kargs passed Oct 2 19:25:56.261762 systemd[1]: Finished ignition-kargs.service. Oct 2 19:25:56.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.260475 ignition[725]: Ignition finished successfully Oct 2 19:25:56.264178 systemd[1]: Starting ignition-disks.service... Oct 2 19:25:56.271225 ignition[731]: Ignition 2.14.0 Oct 2 19:25:56.271235 ignition[731]: Stage: disks Oct 2 19:25:56.271329 ignition[731]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:25:56.271339 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:25:56.272363 ignition[731]: disks: disks passed Oct 2 19:25:56.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.273563 systemd[1]: Finished ignition-disks.service. Oct 2 19:25:56.272404 ignition[731]: Ignition finished successfully Oct 2 19:25:56.274357 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:25:56.275328 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:25:56.275986 systemd[1]: Reached target local-fs.target. Oct 2 19:25:56.276305 systemd[1]: Reached target sysinit.target. Oct 2 19:25:56.276538 systemd[1]: Reached target basic.target. Oct 2 19:25:56.278073 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:25:56.289805 systemd-fsck[739]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:25:56.387757 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:25:56.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.390974 systemd[1]: Mounting sysroot.mount... Oct 2 19:25:56.401942 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:25:56.402094 systemd[1]: Mounted sysroot.mount. Oct 2 19:25:56.403178 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:25:56.405262 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:25:56.406493 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:25:56.406529 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:25:56.406550 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:25:56.410378 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:25:56.412176 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:25:56.417268 initrd-setup-root[749]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:25:56.421795 initrd-setup-root[757]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:25:56.424954 initrd-setup-root[765]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:25:56.429114 initrd-setup-root[773]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:25:56.461624 systemd[1]: Finished initrd-setup-root.service. 
Oct 2 19:25:56.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.463639 systemd[1]: Starting ignition-mount.service... Oct 2 19:25:56.465219 systemd[1]: Starting sysroot-boot.service... Oct 2 19:25:56.469770 bash[790]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:25:56.482752 systemd[1]: Finished sysroot-boot.service. Oct 2 19:25:56.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.484849 ignition[792]: INFO : Ignition 2.14.0 Oct 2 19:25:56.484849 ignition[792]: INFO : Stage: mount Oct 2 19:25:56.486015 ignition[792]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:25:56.486015 ignition[792]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:25:56.486015 ignition[792]: INFO : mount: mount passed Oct 2 19:25:56.486015 ignition[792]: INFO : Ignition finished successfully Oct 2 19:25:56.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:56.486668 systemd[1]: Finished ignition-mount.service. Oct 2 19:25:56.914161 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:25:56.921971 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Oct 2 19:25:56.924393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:25:56.924441 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:25:56.924459 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:25:56.928429 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:25:56.932877 systemd[1]: Starting ignition-files.service... 
Oct 2 19:25:56.948847 ignition[820]: INFO : Ignition 2.14.0 Oct 2 19:25:56.948847 ignition[820]: INFO : Stage: files Oct 2 19:25:56.950798 ignition[820]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:25:56.950798 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:25:56.950798 ignition[820]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:25:56.954669 ignition[820]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:25:56.954669 ignition[820]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:25:56.954669 ignition[820]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:25:56.954669 ignition[820]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:25:56.959008 ignition[820]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:25:56.959008 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:25:56.959008 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:25:56.954669 unknown[820]: wrote ssh authorized keys file for user: core Oct 2 19:25:57.151760 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:25:57.374158 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:25:57.374158 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:25:57.377649 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:25:57.377649 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:25:57.483335 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:25:57.544108 systemd-networkd[704]: eth0: Gained IPv6LL Oct 2 19:25:57.608751 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:25:57.628431 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:25:57.630072 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:25:57.631419 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:25:57.729485 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:25:58.310192 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 19:25:58.312317 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:25:58.312317 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:25:58.312317 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:25:58.374672 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:25:59.849196 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 19:25:59.851354 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:25:59.851354 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:25:59.851354 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:25:59.851354 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:25:59.855561 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:25:59.855561 ignition[820]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:25:59.889896 ignition[820]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:25:59.891337 ignition[820]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:25:59.892620 ignition[820]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:25:59.892620 ignition[820]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:25:59.892620 ignition[820]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(f): [started] setting preset to enabled 
for "prepare-cni-plugins.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:25:59.895854 ignition[820]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:25:59.989941 ignition[820]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:25:59.991162 ignition[820]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:25:59.992065 ignition[820]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:25:59.993268 ignition[820]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:25:59.994438 ignition[820]: INFO : files: files passed Oct 2 19:25:59.994946 ignition[820]: INFO : Ignition finished successfully Oct 2 19:25:59.996814 systemd[1]: Finished ignition-files.service. Oct 2 19:26:00.001153 kernel: kauditd_printk_skb: 24 callbacks suppressed Oct 2 19:26:00.001175 kernel: audit: type=1130 audit(1696274759.997:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:59.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:25:59.998371 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:26:00.001502 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:26:00.003735 systemd[1]: Starting ignition-quench.service... Oct 2 19:26:00.006611 initrd-setup-root-after-ignition[845]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:26:00.008004 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:26:00.008728 initrd-setup-root-after-ignition[848]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:26:00.008728 systemd[1]: Finished ignition-quench.service. Oct 2 19:26:00.014496 kernel: audit: type=1130 audit(1696274760.009:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.014513 kernel: audit: type=1131 audit(1696274760.009:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:00.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.010530 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:26:00.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.015779 systemd[1]: Reached target ignition-complete.target. Oct 2 19:26:00.018790 kernel: audit: type=1130 audit(1696274760.014:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.019491 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:26:00.031733 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:26:00.058434 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:26:00.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.059724 systemd[1]: Reached target initrd-fs.target. Oct 2 19:26:00.064389 kernel: audit: type=1130 audit(1696274760.058:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.064415 kernel: audit: type=1131 audit(1696274760.058:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.063562 systemd[1]: Reached target initrd.target. Oct 2 19:26:00.064682 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:26:00.066204 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:26:00.076718 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:26:00.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.078907 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:26:00.080831 kernel: audit: type=1130 audit(1696274760.076:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.086595 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:26:00.104891 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:26:00.106061 systemd[1]: Stopped target timers.target. Oct 2 19:26:00.107077 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:26:00.107798 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:26:00.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:00.109035 systemd[1]: Stopped target initrd.target. Oct 2 19:26:00.111739 kernel: audit: type=1131 audit(1696274760.107:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.111755 systemd[1]: Stopped target basic.target. Oct 2 19:26:00.112774 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:26:00.113965 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:26:00.115102 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:26:00.116259 systemd[1]: Stopped target remote-fs.target. Oct 2 19:26:00.117330 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:26:00.118433 systemd[1]: Stopped target sysinit.target. Oct 2 19:26:00.119423 systemd[1]: Stopped target local-fs.target. Oct 2 19:26:00.120478 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:26:00.121557 systemd[1]: Stopped target swap.target. Oct 2 19:26:00.122531 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:26:00.123218 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:26:00.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.124396 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:26:00.127195 kernel: audit: type=1131 audit(1696274760.123:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.127185 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:26:00.127870 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:26:00.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.128981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:26:00.129072 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:26:00.132848 kernel: audit: type=1131 audit(1696274760.127:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.132987 systemd[1]: Stopped target paths.target. Oct 2 19:26:00.133974 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:26:00.139965 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:26:00.158664 systemd[1]: Stopped target slices.target. Oct 2 19:26:00.159683 systemd[1]: Stopped target sockets.target. Oct 2 19:26:00.160787 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:26:00.161599 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:26:00.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.162943 systemd[1]: ignition-files.service: Deactivated successfully. 
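Illustrative aside: during the files stage logged above, Ignition fetched cni-plugins, crictl, kubeadm and kubelet and compared each download against an expected SHA-512 digest ("file matches expected sum of: ..."). The sketch below is not Ignition's own code; it is a minimal illustration of that kind of streamed download-and-verify step, reusing the crictl URL and digest exactly as they appear in the log. The function name and destination path are assumptions.

```python
import hashlib
import urllib.request

# Illustrative sketch only (not Ignition's implementation): stream a download
# to disk while hashing it, then compare against an expected SHA-512 digest.
def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):   # 1 MiB chunks
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected_sha512.lower():
        raise ValueError(f"checksum mismatch for {url}")

if __name__ == "__main__":
    # URL and digest copied from the crictl entry in the log above.
    fetch_and_verify(
        "https://github.com/kubernetes-sigs/cri-tools/releases/download/"
        "v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz",
        "aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc"
        "31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a",
        "/tmp/crictl.tar.gz",   # hypothetical destination
    )
```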
Oct 2 19:26:00.163590 systemd[1]: Stopped ignition-files.service. Oct 2 19:26:00.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.165557 systemd[1]: Stopping ignition-mount.service... Oct 2 19:26:00.167022 systemd[1]: Stopping iscsid.service... Oct 2 19:26:00.167231 iscsid[710]: iscsid shutting down. Oct 2 19:26:00.169803 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:26:00.171036 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:26:00.171875 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:26:00.173144 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:26:00.173869 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:26:00.177324 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:26:00.178149 systemd[1]: Stopped iscsid.service. Oct 2 19:26:00.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.180980 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:26:00.181088 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:26:00.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.183864 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:26:00.183907 systemd[1]: Closed iscsid.socket. Oct 2 19:26:00.185569 systemd[1]: Stopping iscsiuio.service... Oct 2 19:26:00.187746 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:26:00.190238 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:26:00.190349 systemd[1]: Stopped iscsiuio.service. Oct 2 19:26:00.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.191493 systemd[1]: Stopped target network.target. Oct 2 19:26:00.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.192330 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:26:00.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:00.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.196376 ignition[861]: INFO : Ignition 2.14.0 Oct 2 19:26:00.196376 ignition[861]: INFO : Stage: umount Oct 2 19:26:00.196376 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:26:00.196376 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:26:00.196376 ignition[861]: INFO : umount: umount passed Oct 2 19:26:00.196376 ignition[861]: INFO : Ignition finished successfully Oct 2 19:26:00.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.192360 systemd[1]: Closed iscsiuio.socket. Oct 2 19:26:00.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.192632 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:26:00.208000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:26:00.192815 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:26:00.193206 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:26:00.193269 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:26:00.194470 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:26:00.194532 systemd[1]: Stopped ignition-mount.service. Oct 2 19:26:00.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.195220 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:26:00.195255 systemd[1]: Stopped ignition-disks.service. Oct 2 19:26:00.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:00.195806 systemd-networkd[704]: eth0: DHCPv6 lease lost Oct 2 19:26:00.217000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:26:00.196359 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:26:00.196390 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:26:00.197001 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:26:00.197034 systemd[1]: Stopped ignition-setup.service. Oct 2 19:26:00.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.198718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:26:00.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.198751 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:26:00.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.200104 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:26:00.200179 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:26:00.202227 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:26:00.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.202345 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:26:00.203738 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:26:00.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.203778 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:26:00.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.205749 systemd[1]: Stopping network-cleanup.service... Oct 2 19:26:00.206354 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:26:00.206395 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:26:00.207188 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:26:00.207221 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:26:00.207954 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:26:00.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:00.207986 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:26:00.208775 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:26:00.210700 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Oct 2 19:26:00.213612 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:26:00.213685 systemd[1]: Stopped network-cleanup.service. Oct 2 19:26:00.216630 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:26:00.216734 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:26:00.218255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:26:00.218289 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:26:00.256909 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:26:00.256991 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:26:00.257272 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:26:00.257318 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:26:00.257587 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:26:00.257615 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:26:00.257835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:26:00.257864 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:26:00.262362 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:26:00.262619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:26:00.262678 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:26:00.265615 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:26:00.265676 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:26:00.266969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:26:00.267052 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:26:00.269316 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:26:00.271709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:26:00.271788 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:26:00.273047 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:26:00.274020 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:26:00.289219 systemd[1]: Switching root. Oct 2 19:26:00.307331 systemd-journald[197]: Journal stopped Oct 2 19:26:03.912480 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Oct 2 19:26:03.912550 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:26:03.912570 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:26:03.912580 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:26:03.912597 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:26:03.912610 kernel: SELinux: policy capability open_perms=1 Oct 2 19:26:03.912620 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:26:03.912629 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:26:03.912639 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:26:03.912651 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:26:03.912664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:26:03.912673 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:26:03.912685 systemd[1]: Successfully loaded SELinux policy in 37.558ms. Oct 2 19:26:03.912705 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.932ms. 
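Illustrative aside: after the switch to the real root above, the kernel lists SELinux policy capabilities as "<name>=<0|1>". A hypothetical sketch (assuming journal text shaped like those lines) of collecting them into a mapping:

```python
import re

# Illustrative only: collect "SELinux: policy capability <name>=<0|1>" lines
# from journal text into a name -> bool mapping.
CAP_RE = re.compile(r"SELinux: policy capability (\w+)=([01])")

def selinux_capabilities(journal_text: str) -> dict:
    return {name: val == "1" for name, val in CAP_RE.findall(journal_text)}

if __name__ == "__main__":
    # Sample fragments copied from the journal text above.
    sample = ("kernel: SELinux: policy capability network_peer_controls=1 "
              "kernel: SELinux: policy capability always_check_network=0")
    print(selinux_capabilities(sample))
    # {'network_peer_controls': True, 'always_check_network': False}
```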
Oct 2 19:26:03.912717 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:03.912727 systemd[1]: Detected virtualization kvm. Oct 2 19:26:03.912737 systemd[1]: Detected architecture x86-64. Oct 2 19:26:03.912747 systemd[1]: Detected first boot. Oct 2 19:26:03.912758 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:03.912775 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:26:03.912787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:03.912801 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:26:03.912813 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:03.912831 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:26:03.912842 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:26:03.912852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:26:03.912862 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:26:03.912873 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:26:03.912885 systemd[1]: Created slice system-getty.slice. Oct 2 19:26:03.912895 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:26:03.912905 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:26:03.912928 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:26:03.912938 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:26:03.912948 systemd[1]: Created slice user.slice. Oct 2 19:26:03.912959 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:03.912969 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:26:03.912979 systemd[1]: Set up automount boot.automount. Oct 2 19:26:03.912991 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:26:03.913001 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:26:03.913012 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:26:03.913022 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:26:03.913032 systemd[1]: Reached target integritysetup.target. Oct 2 19:26:03.913042 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:03.913053 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:03.913069 systemd[1]: Reached target slices.target. Oct 2 19:26:03.913084 systemd[1]: Reached target swap.target. Oct 2 19:26:03.913095 systemd[1]: Reached target torcx.target. Oct 2 19:26:03.913105 systemd[1]: Reached target veritysetup.target. Oct 2 19:26:03.913114 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:26:03.913124 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:26:03.913135 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:03.913145 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:03.913156 systemd[1]: Listening on systemd-udevd-kernel.socket. 
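Illustrative aside: the "systemd 252 running in system mode (...)" line above encodes compile-time options as "+FLAG"/"-FLAG" tokens plus key=value settings such as default-hierarchy=unified. The sketch below is a hypothetical helper (not a systemd API) that splits such a feature string into enabled and disabled options:

```python
# Illustrative only: split a systemd feature string like the one logged above
# ("+PAM +AUDIT +SELINUX -APPARMOR ...") into enabled and disabled options.
def parse_features(feature_string: str):
    enabled, disabled = [], []
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.append(token[1:])
        elif token.startswith("-"):
            disabled.append(token[1:])
        # key=value tokens such as "default-hierarchy=unified" are left out here
    return enabled, disabled

if __name__ == "__main__":
    # Shortened sample based on the feature string in the log above.
    flags = "+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -TPM2 +ZSTD"
    on, off = parse_features(flags)
    print("enabled: ", ", ".join(on))
    print("disabled:", ", ".join(off))
```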
Oct 2 19:26:03.913166 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:26:03.913182 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:26:03.913192 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:26:03.913205 systemd[1]: Mounting media.mount... Oct 2 19:26:03.913221 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:26:03.913231 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:26:03.913241 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:26:03.913251 systemd[1]: Mounting tmp.mount... Oct 2 19:26:03.913261 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:26:03.913272 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:26:03.913284 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:03.913294 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:26:03.913310 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:26:03.913320 systemd[1]: Starting modprobe@drm.service... Oct 2 19:26:03.913330 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:26:03.913340 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:26:03.913350 systemd[1]: Starting modprobe@loop.service... Oct 2 19:26:03.913361 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:26:03.913371 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:26:03.913383 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:26:03.913394 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:26:03.913404 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:26:03.913414 systemd[1]: Stopped systemd-journald.service. Oct 2 19:26:03.913424 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:03.913434 kernel: loop: module loaded Oct 2 19:26:03.913444 kernel: fuse: init (API version 7.34) Oct 2 19:26:03.913453 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:03.913464 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:26:03.913476 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:26:03.913486 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:03.913499 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:26:03.913511 systemd[1]: Stopped verity-setup.service. Oct 2 19:26:03.913521 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:26:03.913536 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:26:03.913546 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:26:03.913555 systemd[1]: Mounted media.mount. Oct 2 19:26:03.913566 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:26:03.913577 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:26:03.913587 systemd[1]: Mounted tmp.mount. Oct 2 19:26:03.913597 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:03.913607 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:26:03.913617 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:26:03.913631 systemd-journald[963]: Journal started Oct 2 19:26:03.913672 systemd-journald[963]: Runtime Journal (/run/log/journal/a4a232194b384a8aac06d9b761f1ca2a) is 6.0M, max 48.5M, 42.5M free. 
Oct 2 19:26:00.378000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:26:00.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:00.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:00.710000 audit: BPF prog-id=10 op=LOAD Oct 2 19:26:00.710000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:26:00.710000 audit: BPF prog-id=11 op=LOAD Oct 2 19:26:00.710000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:26:03.758000 audit: BPF prog-id=12 op=LOAD Oct 2 19:26:03.758000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:26:03.758000 audit: BPF prog-id=13 op=LOAD Oct 2 19:26:03.758000 audit: BPF prog-id=14 op=LOAD Oct 2 19:26:03.759000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:26:03.759000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:26:03.759000 audit: BPF prog-id=15 op=LOAD Oct 2 19:26:03.759000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:26:03.759000 audit: BPF prog-id=16 op=LOAD Oct 2 19:26:03.759000 audit: BPF prog-id=17 op=LOAD Oct 2 19:26:03.759000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:26:03.759000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:26:03.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.776000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:26:03.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.880000 audit: BPF prog-id=18 op=LOAD Oct 2 19:26:03.880000 audit: BPF prog-id=19 op=LOAD Oct 2 19:26:03.880000 audit: BPF prog-id=20 op=LOAD Oct 2 19:26:03.881000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:26:03.881000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:26:03.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:26:03.910000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:26:03.910000 audit[963]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffed42a7210 a2=4000 a3=7ffed42a72ac items=0 ppid=1 pid=963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:03.910000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:26:03.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.757705 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:26:00.779036 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:03.757721 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:26:00.779354 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:03.761088 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:26:00.779376 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:00.779417 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:26:00.779428 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:26:00.779470 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:26:00.779485 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:26:00.779793 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:26:00.779848 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:00.779866 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:03.916015 systemd[1]: Started systemd-journald.service. Oct 2 19:26:00.780365 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:26:00.780415 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:26:03.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:00.780438 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:26:00.780457 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:26:00.780477 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:26:00.780494 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:26:03.445310 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:03.445599 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:03.916606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:26:03.445711 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:03.445903 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:03.445974 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:26:03.446045 /usr/lib/systemd/system-generators/torcx-generator[893]: time="2023-10-02T19:26:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:26:03.916782 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:26:03.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:03.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.917757 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:26:03.917938 systemd[1]: Finished modprobe@drm.service. Oct 2 19:26:03.918803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:26:03.918960 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:26:03.920053 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:26:03.920255 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:26:03.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.921671 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:26:03.921896 systemd[1]: Finished modprobe@loop.service. Oct 2 19:26:03.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.922936 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:03.924023 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:26:03.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.925052 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:26:03.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:03.926144 systemd[1]: Reached target network-pre.target. 
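Illustrative aside: the torcx-generator messages interleaved above use a key=value text format (time="..." level=debug msg="..."), which looks like logrus-style output. The sketch below is a hypothetical parser for one such line, assuming that format; it is not part of torcx.

```python
import re

# Illustrative only: pull key=value fields out of a torcx-generator line.
# Handles both quoted values (msg="profile found") and bare values (level=debug).
FIELD_RE = re.compile(r'(\w+)=("((?:[^"\\]|\\.)*)"|\S+)')

def parse_fields(line: str) -> dict:
    fields = {}
    for key, raw, quoted in FIELD_RE.findall(line):
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

if __name__ == "__main__":
    # Sample line copied from the torcx-generator output above.
    line = ('time="2023-10-02T19:26:00Z" level=debug msg="profile found" '
            'name=vendor path=/usr/share/torcx/profiles/vendor.json')
    print(parse_fields(line))
```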
Oct 2 19:26:03.928247 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:26:03.944377 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:26:03.945320 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:26:03.946902 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:26:03.949060 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:26:04.000043 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:26:04.001382 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:26:04.002062 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:26:04.003083 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:04.007024 systemd-journald[963]: Time spent on flushing to /var/log/journal/a4a232194b384a8aac06d9b761f1ca2a is 52.444ms for 1089 entries. Oct 2 19:26:04.007024 systemd-journald[963]: System Journal (/var/log/journal/a4a232194b384a8aac06d9b761f1ca2a) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:26:04.702660 systemd-journald[963]: Received client request to flush runtime journal. Oct 2 19:26:04.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.005914 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:26:04.006638 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:26:04.008135 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:26:04.703567 udevadm[997]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:26:04.009871 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:26:04.014361 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:04.016018 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:26:04.016777 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:26:04.147995 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:26:04.149722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
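Illustrative aside: systemd-journald reports above that flushing to /var/log/journal took 52.444ms for 1089 entries. A small illustrative calculation (hypothetical helper, assuming a message of exactly that shape) of the per-entry flush cost:

```python
import re

# Illustrative only: derive per-entry flush cost from a journald message like
# "Time spent on flushing to ... is 52.444ms for 1089 entries."
FLUSH_RE = re.compile(r"is ([\d.]+)ms for (\d+) entries")

def per_entry_flush_us(message: str) -> float:
    m = FLUSH_RE.search(message)
    if not m:
        raise ValueError("no flush summary found")
    total_ms, entries = float(m.group(1)), int(m.group(2))
    return 1000.0 * total_ms / entries   # microseconds per entry

if __name__ == "__main__":
    # Sample message copied from the journal text above.
    msg = ("systemd-journald[963]: Time spent on flushing to "
           "/var/log/journal/a4a232194b384a8aac06d9b761f1ca2a "
           "is 52.444ms for 1089 entries.")
    print(f"{per_entry_flush_us(msg):.1f} microseconds per entry")   # ~48.2
```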
Oct 2 19:26:04.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:04.165329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:26:04.327869 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:26:04.350842 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:26:04.704035 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:26:05.045820 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:26:05.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.047064 kernel: kauditd_printk_skb: 92 callbacks suppressed Oct 2 19:26:05.047115 kernel: audit: type=1130 audit(1696274765.045:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.048000 audit: BPF prog-id=21 op=LOAD Oct 2 19:26:05.050254 kernel: audit: type=1334 audit(1696274765.048:136): prog-id=21 op=LOAD Oct 2 19:26:05.050301 kernel: audit: type=1334 audit(1696274765.049:137): prog-id=22 op=LOAD Oct 2 19:26:05.049000 audit: BPF prog-id=22 op=LOAD Oct 2 19:26:05.050975 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:05.051077 kernel: audit: type=1334 audit(1696274765.049:138): prog-id=7 op=UNLOAD Oct 2 19:26:05.051111 kernel: audit: type=1334 audit(1696274765.049:139): prog-id=8 op=UNLOAD Oct 2 19:26:05.049000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:26:05.049000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:26:05.069182 systemd-udevd[1001]: Using default interface naming scheme 'v252'. Oct 2 19:26:05.085706 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:05.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.089783 kernel: audit: type=1130 audit(1696274765.086:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.087000 audit: BPF prog-id=23 op=LOAD Oct 2 19:26:05.090742 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:05.091935 kernel: audit: type=1334 audit(1696274765.087:141): prog-id=23 op=LOAD Oct 2 19:26:05.098886 kernel: audit: type=1334 audit(1696274765.095:142): prog-id=24 op=LOAD Oct 2 19:26:05.098976 kernel: audit: type=1334 audit(1696274765.096:143): prog-id=25 op=LOAD Oct 2 19:26:05.099007 kernel: audit: type=1334 audit(1696274765.097:144): prog-id=26 op=LOAD Oct 2 19:26:05.095000 audit: BPF prog-id=24 op=LOAD Oct 2 19:26:05.096000 audit: BPF prog-id=25 op=LOAD Oct 2 19:26:05.097000 audit: BPF prog-id=26 op=LOAD Oct 2 19:26:05.098845 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:26:05.117128 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:26:05.131687 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:26:05.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.154229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:05.177000 audit[1014]: AVC avc: denied { confidentiality } for pid=1014 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:26:05.177000 audit[1014]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5558928425a0 a1=32194 a2=7f36ab83fbc5 a3=5 items=106 ppid=1001 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:05.177000 audit: CWD cwd="/" Oct 2 19:26:05.177000 audit: PATH item=0 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=1 name=(null) inode=13946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=2 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=3 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=4 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=5 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=6 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=7 name=(null) inode=13949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=8 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=9 name=(null) inode=13950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=10 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=11 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 
19:26:05.177000 audit: PATH item=12 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.183052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:26:05.177000 audit: PATH item=13 name=(null) inode=13952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=14 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=15 name=(null) inode=13953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=16 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=17 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=18 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=19 name=(null) inode=13955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=20 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=21 name=(null) inode=13956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=22 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=23 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=24 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=25 name=(null) inode=13958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=26 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=27 name=(null) inode=13959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=28 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=29 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=30 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=31 name=(null) inode=13961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=32 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=33 name=(null) inode=13962 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=34 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=35 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=36 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=37 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=38 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=39 name=(null) inode=13965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=40 name=(null) inode=13945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=41 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=42 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=43 name=(null) inode=13967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=44 name=(null) inode=13966 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=45 name=(null) inode=13968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=46 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=47 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=48 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=49 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=50 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=51 name=(null) inode=13971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=52 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=53 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=54 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=55 name=(null) inode=13973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=56 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=57 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=58 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=59 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=60 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=61 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=62 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=63 name=(null) inode=13977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=64 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=65 name=(null) inode=13978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=66 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=67 name=(null) inode=13979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=68 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=69 name=(null) inode=13980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=70 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=71 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=72 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=73 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=74 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=75 name=(null) inode=13983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=76 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH 
item=77 name=(null) inode=13984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=78 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=79 name=(null) inode=13985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=80 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=81 name=(null) inode=13986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=82 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=83 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=84 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=85 name=(null) inode=13988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=86 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=87 name=(null) inode=13989 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=88 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=89 name=(null) inode=13990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=90 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=91 name=(null) inode=13991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=92 name=(null) inode=13987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=93 name=(null) inode=13992 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=94 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=95 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=96 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=97 name=(null) inode=13994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=98 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=99 name=(null) inode=13995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=100 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=101 name=(null) inode=13996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=102 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=103 name=(null) inode=13997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=104 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PATH item=105 name=(null) inode=13998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:05.177000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:26:05.187961 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:26:05.192935 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:26:05.199937 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:26:05.206946 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:26:05.209972 systemd-networkd[1011]: lo: Link UP Oct 2 19:26:05.209987 systemd-networkd[1011]: lo: Gained carrier Oct 2 19:26:05.210646 systemd-networkd[1011]: Enumeration completed Oct 2 19:26:05.210775 systemd[1]: Started systemd-networkd.service. 
Oct 2 19:26:05.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.212107 systemd-networkd[1011]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:05.213646 systemd-networkd[1011]: eth0: Link UP Oct 2 19:26:05.213658 systemd-networkd[1011]: eth0: Gained carrier Oct 2 19:26:05.244066 systemd-networkd[1011]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:26:05.295973 kernel: kvm: Nested Virtualization enabled Oct 2 19:26:05.296150 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:26:05.311954 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:26:05.334275 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:26:05.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.336000 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:26:05.349367 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:05.377734 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:26:05.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.378504 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:05.380124 systemd[1]: Starting lvm2-activation.service... Oct 2 19:26:05.384331 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:05.413826 systemd[1]: Finished lvm2-activation.service. Oct 2 19:26:05.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.456047 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:05.456956 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:26:05.456982 systemd[1]: Reached target local-fs.target. Oct 2 19:26:05.457517 systemd[1]: Reached target machines.target. Oct 2 19:26:05.459496 systemd[1]: Starting ldconfig.service... Oct 2 19:26:05.460513 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:26:05.460571 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:05.461574 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:26:05.463075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:26:05.464692 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:26:05.465562 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:05.465601 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:05.466504 systemd[1]: Starting systemd-tmpfiles-setup.service... 
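(Note: the entries above show systemd-networkd bringing up eth0 from /usr/lib/systemd/network/zz-default.network and then acquiring a DHCPv4 lease of 10.0.0.19/16. The file itself is not reproduced in this log; a catch-all DHCP network unit of this kind typically looks roughly like the sketch below — the exact option set is an assumption, not taken from this capture.)

    # /usr/lib/systemd/network/zz-default.network (illustrative sketch)
    [Match]
    # lowest-priority unit: match any interface not claimed by an earlier .network file
    Name=*

    [Network]
    # request an address via DHCP, consistent with the DHCPv4 lease seen above
    DHCP=yes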
Oct 2 19:26:05.467928 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1039 (bootctl) Oct 2 19:26:05.470221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:26:05.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.476973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:26:05.478560 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:26:05.479111 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:26:05.480327 systemd-tmpfiles[1042]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:26:05.507826 systemd-fsck[1047]: fsck.fat 4.2 (2021-01-31) Oct 2 19:26:05.507826 systemd-fsck[1047]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:26:05.508815 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:26:05.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:05.542868 systemd[1]: Mounting boot.mount... Oct 2 19:26:05.569052 systemd[1]: Mounted boot.mount. Oct 2 19:26:05.605712 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:26:05.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.312201 systemd-networkd[1011]: eth0: Gained IPv6LL Oct 2 19:26:06.591101 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:26:06.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.593622 systemd[1]: Starting audit-rules.service... Oct 2 19:26:06.595698 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:26:06.601000 audit: BPF prog-id=27 op=LOAD Oct 2 19:26:06.599318 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:26:06.603000 audit: BPF prog-id=28 op=LOAD Oct 2 19:26:06.603125 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:06.605882 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:26:06.608833 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:26:06.611465 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:26:06.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.612480 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:26:06.618000 audit[1063]: SYSTEM_BOOT pid=1063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? 
addr=? terminal=? res=success' Oct 2 19:26:06.621635 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:26:06.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.628111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:26:06.628694 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:26:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.639472 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:26:06.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:06.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:26:06.648000 audit[1072]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdc6ddcc0 a2=420 a3=0 items=0 ppid=1052 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:06.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:26:06.650190 augenrules[1072]: No rules Oct 2 19:26:06.649930 systemd[1]: Finished audit-rules.service. Oct 2 19:26:06.668648 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:26:07.183357 systemd-timesyncd[1062]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:26:07.183393 systemd-timesyncd[1062]: Initial clock synchronization to Mon 2023-10-02 19:26:07.183281 UTC. Oct 2 19:26:07.183650 systemd[1]: Reached target time-set.target. Oct 2 19:26:07.183985 systemd-resolved[1056]: Positive Trust Anchors: Oct 2 19:26:07.183994 systemd-resolved[1056]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:07.184027 systemd-resolved[1056]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:07.201079 systemd-resolved[1056]: Defaulting to hostname 'linux'. Oct 2 19:26:07.202507 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:07.203239 systemd[1]: Reached target network.target. Oct 2 19:26:07.203855 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:07.237644 ldconfig[1038]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:26:07.242564 systemd[1]: Finished ldconfig.service. Oct 2 19:26:07.244667 systemd[1]: Starting systemd-update-done.service... Oct 2 19:26:07.251835 systemd[1]: Finished systemd-update-done.service. Oct 2 19:26:07.252500 systemd[1]: Reached target sysinit.target. 
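(Note: the audit PROCTITLE records above carry the offending command line as a hex string with NUL-separated arguments. The value logged for pid 1072 decodes to "/sbin/auditctl -R /etc/audit/audit.rules", i.e. augenrules reloading the — here empty — rule set. A small Python sketch for decoding such fields:)

    # Decode an audit PROCTITLE hex string into its NUL-separated argv.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']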
Oct 2 19:26:07.253108 systemd[1]: Started motdgen.path. Oct 2 19:26:07.253597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:26:07.254481 systemd[1]: Started logrotate.timer. Oct 2 19:26:07.255185 systemd[1]: Started mdadm.timer. Oct 2 19:26:07.255643 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:26:07.256285 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:26:07.256309 systemd[1]: Reached target paths.target. Oct 2 19:26:07.256807 systemd[1]: Reached target timers.target. Oct 2 19:26:07.257525 systemd[1]: Listening on dbus.socket. Oct 2 19:26:07.258826 systemd[1]: Starting docker.socket... Oct 2 19:26:07.261369 systemd[1]: Listening on sshd.socket. Oct 2 19:26:07.261989 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:07.262310 systemd[1]: Listening on docker.socket. Oct 2 19:26:07.262880 systemd[1]: Reached target sockets.target. Oct 2 19:26:07.263396 systemd[1]: Reached target basic.target. Oct 2 19:26:07.263936 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:07.263959 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:07.264838 systemd[1]: Starting containerd.service... Oct 2 19:26:07.266152 systemd[1]: Starting dbus.service... Oct 2 19:26:07.267445 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:26:07.269112 systemd[1]: Starting extend-filesystems.service... Oct 2 19:26:07.269814 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:26:07.270816 systemd[1]: Starting motdgen.service... Oct 2 19:26:07.271950 jq[1083]: false Oct 2 19:26:07.273517 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:26:07.275609 systemd[1]: Starting prepare-critools.service... Oct 2 19:26:07.277699 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:26:07.279547 systemd[1]: Starting sshd-keygen.service... Oct 2 19:26:07.281094 extend-filesystems[1084]: Found sr0 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda1 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda2 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda3 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found usr Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda4 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda6 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda7 Oct 2 19:26:07.282018 extend-filesystems[1084]: Found vda9 Oct 2 19:26:07.282018 extend-filesystems[1084]: Checking size of /dev/vda9 Oct 2 19:26:07.291982 dbus-daemon[1082]: [system] SELinux support is enabled Oct 2 19:26:07.284936 systemd[1]: Starting systemd-logind.service... Oct 2 19:26:07.287416 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:07.287481 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 2 19:26:07.288361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:26:07.290007 systemd[1]: Starting update-engine.service... Oct 2 19:26:07.293780 jq[1103]: true Oct 2 19:26:07.291398 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:26:07.293177 systemd[1]: Started dbus.service. Oct 2 19:26:07.296290 extend-filesystems[1084]: Old size kept for /dev/vda9 Oct 2 19:26:07.297378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:26:07.298087 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:26:07.298410 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:26:07.298534 systemd[1]: Finished extend-filesystems.service. Oct 2 19:26:07.300807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:26:07.300961 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:26:07.305415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:26:07.305458 systemd[1]: Reached target system-config.target. Oct 2 19:26:07.306182 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:26:07.306202 systemd[1]: Reached target user-config.target. Oct 2 19:26:07.311569 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:26:07.362306 tar[1108]: crictl Oct 2 19:26:07.363098 tar[1107]: ./ Oct 2 19:26:07.363098 tar[1107]: ./loopback Oct 2 19:26:07.363283 jq[1109]: true Oct 2 19:26:07.311744 systemd[1]: Finished motdgen.service. Oct 2 19:26:07.393907 systemd-logind[1099]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:26:07.393927 systemd-logind[1099]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:26:07.396242 systemd-logind[1099]: New seat seat0. Oct 2 19:26:07.396732 env[1110]: time="2023-10-02T19:26:07.396388709Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:26:07.405720 systemd[1]: Started systemd-logind.service. Oct 2 19:26:07.414003 bash[1137]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:07.415300 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:26:07.415823 update_engine[1102]: I1002 19:26:07.414326 1102 main.cc:92] Flatcar Update Engine starting Oct 2 19:26:07.418842 systemd[1]: Started update-engine.service. Oct 2 19:26:07.418922 update_engine[1102]: I1002 19:26:07.418913 1102 update_check_scheduler.cc:74] Next update check in 5m15s Oct 2 19:26:07.421071 systemd[1]: Started locksmithd.service. Oct 2 19:26:07.426550 env[1110]: time="2023-10-02T19:26:07.426512953Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:26:07.426669 env[1110]: time="2023-10-02T19:26:07.426646924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.427891 env[1110]: time="2023-10-02T19:26:07.427863175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:07.427891 env[1110]: time="2023-10-02T19:26:07.427888322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428227 env[1110]: time="2023-10-02T19:26:07.428197091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428227 env[1110]: time="2023-10-02T19:26:07.428224402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428288 env[1110]: time="2023-10-02T19:26:07.428241224Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:26:07.428288 env[1110]: time="2023-10-02T19:26:07.428254238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428338 env[1110]: time="2023-10-02T19:26:07.428330451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428649 env[1110]: time="2023-10-02T19:26:07.428625795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428911 env[1110]: time="2023-10-02T19:26:07.428887506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:07.428911 env[1110]: time="2023-10-02T19:26:07.428909277Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:26:07.429007 env[1110]: time="2023-10-02T19:26:07.428974819Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:26:07.429058 env[1110]: time="2023-10-02T19:26:07.429010687Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:26:07.433639 tar[1107]: ./bandwidth Oct 2 19:26:07.435124 env[1110]: time="2023-10-02T19:26:07.435099045Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:26:07.435169 env[1110]: time="2023-10-02T19:26:07.435128410Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:26:07.435169 env[1110]: time="2023-10-02T19:26:07.435141073Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435167333Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435179045Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435190216Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435201196Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435212808Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435228 env[1110]: time="2023-10-02T19:26:07.435224229Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435333 env[1110]: time="2023-10-02T19:26:07.435235741Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435333 env[1110]: time="2023-10-02T19:26:07.435258063Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435333 env[1110]: time="2023-10-02T19:26:07.435268643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:26:07.435385 env[1110]: time="2023-10-02T19:26:07.435353923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:26:07.435443 env[1110]: time="2023-10-02T19:26:07.435423784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:26:07.435653 env[1110]: time="2023-10-02T19:26:07.435632535Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:26:07.435699 env[1110]: time="2023-10-02T19:26:07.435659035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435699 env[1110]: time="2023-10-02T19:26:07.435670476Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:26:07.435738 env[1110]: time="2023-10-02T19:26:07.435708417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435738 env[1110]: time="2023-10-02T19:26:07.435719648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435738 env[1110]: time="2023-10-02T19:26:07.435730058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435810 env[1110]: time="2023-10-02T19:26:07.435739836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435810 env[1110]: time="2023-10-02T19:26:07.435751158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435810 env[1110]: time="2023-10-02T19:26:07.435763200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435810 env[1110]: time="2023-10-02T19:26:07.435774531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435810 env[1110]: time="2023-10-02T19:26:07.435798176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435909 env[1110]: time="2023-10-02T19:26:07.435811160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Oct 2 19:26:07.435932 env[1110]: time="2023-10-02T19:26:07.435918201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435954 env[1110]: time="2023-10-02T19:26:07.435931846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435954 env[1110]: time="2023-10-02T19:26:07.435942747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.435999 env[1110]: time="2023-10-02T19:26:07.435952365Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:26:07.435999 env[1110]: time="2023-10-02T19:26:07.435964989Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:26:07.435999 env[1110]: time="2023-10-02T19:26:07.435974246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:26:07.436057 env[1110]: time="2023-10-02T19:26:07.435996678Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:26:07.436057 env[1110]: time="2023-10-02T19:26:07.436027085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:26:07.436236 env[1110]: time="2023-10-02T19:26:07.436189920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.436238071Z" level=info msg="Connect containerd service" Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.436268057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.436690529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.436965244Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.437005590Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:26:07.437965 env[1110]: time="2023-10-02T19:26:07.437043491Z" level=info msg="containerd successfully booted in 0.047070s" Oct 2 19:26:07.437104 systemd[1]: Started containerd.service. Oct 2 19:26:07.444327 env[1110]: time="2023-10-02T19:26:07.444291674Z" level=info msg="Start subscribing containerd event" Oct 2 19:26:07.444432 env[1110]: time="2023-10-02T19:26:07.444415857Z" level=info msg="Start recovering state" Oct 2 19:26:07.444506 env[1110]: time="2023-10-02T19:26:07.444486219Z" level=info msg="Start event monitor" Oct 2 19:26:07.444554 env[1110]: time="2023-10-02T19:26:07.444510775Z" level=info msg="Start snapshots syncer" Oct 2 19:26:07.444554 env[1110]: time="2023-10-02T19:26:07.444519231Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:26:07.444554 env[1110]: time="2023-10-02T19:26:07.444529410Z" level=info msg="Start streaming server" Oct 2 19:26:07.533853 tar[1107]: ./ptp Oct 2 19:26:07.574273 tar[1107]: ./vlan Oct 2 19:26:07.600185 locksmithd[1139]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:26:07.618184 tar[1107]: ./host-device Oct 2 19:26:07.649697 tar[1107]: ./tuning Oct 2 19:26:07.678548 tar[1107]: ./vrf Oct 2 19:26:07.717394 tar[1107]: ./sbr Oct 2 19:26:07.753076 tar[1107]: ./tap Oct 2 19:26:07.784678 tar[1107]: ./dhcp Oct 2 19:26:07.899298 tar[1107]: ./static Oct 2 19:26:07.921481 tar[1107]: ./firewall Oct 2 19:26:07.960881 tar[1107]: ./macvlan Oct 2 19:26:07.970909 systemd[1]: Finished prepare-critools.service. Oct 2 19:26:07.992219 tar[1107]: ./dummy Oct 2 19:26:08.020836 tar[1107]: ./bridge Oct 2 19:26:08.053010 tar[1107]: ./ipvlan Oct 2 19:26:08.082374 tar[1107]: ./portmap Oct 2 19:26:08.110424 tar[1107]: ./host-local Oct 2 19:26:08.142516 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:26:08.627522 sshd_keygen[1114]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:26:08.644583 systemd[1]: Finished sshd-keygen.service. Oct 2 19:26:08.646646 systemd[1]: Starting issuegen.service... Oct 2 19:26:08.651310 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:26:08.651472 systemd[1]: Finished issuegen.service. Oct 2 19:26:08.653265 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:26:08.658076 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:26:08.659844 systemd[1]: Started getty@tty1.service. Oct 2 19:26:08.661454 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:26:08.662232 systemd[1]: Reached target getty.target. Oct 2 19:26:08.662912 systemd[1]: Reached target multi-user.target. 
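(Note: the "Start cri plugin with config {...}" dump above lists the effective settings of containerd 1.6's CRI plugin. Expressed as a config.toml fragment — a sketch reconstructed from that dump, not the file actually present on disk — the relevant values are roughly:)

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      # sandbox image and overlayfs snapshotter, as reported in the dump
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            # matches Options:map[SystemdCgroup:true] in the logged runc runtime config
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        # CNI paths from the dump; the "no network config found in /etc/cni/net.d"
        # error above is expected until the CNI plugins and configs are installed
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"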
Oct 2 19:26:08.664593 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:26:08.670432 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:26:08.670618 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:26:08.671411 systemd[1]: Startup finished in 723ms (kernel) + 6.510s (initrd) + 7.822s (userspace) = 15.055s. Oct 2 19:26:10.717671 systemd[1]: Created slice system-sshd.slice. Oct 2 19:26:10.719004 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:41562.service. Oct 2 19:26:10.762769 sshd[1166]: Accepted publickey for core from 10.0.0.1 port 41562 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:10.764462 sshd[1166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:10.773026 systemd[1]: Created slice user-500.slice. Oct 2 19:26:10.774125 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:26:10.775608 systemd-logind[1099]: New session 1 of user core. Oct 2 19:26:10.782549 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:26:10.784398 systemd[1]: Starting user@500.service... Oct 2 19:26:10.787697 (systemd)[1169]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:10.865100 systemd[1169]: Queued start job for default target default.target. Oct 2 19:26:10.865593 systemd[1169]: Reached target paths.target. Oct 2 19:26:10.865613 systemd[1169]: Reached target sockets.target. Oct 2 19:26:10.865626 systemd[1169]: Reached target timers.target. Oct 2 19:26:10.865637 systemd[1169]: Reached target basic.target. Oct 2 19:26:10.865675 systemd[1169]: Reached target default.target. Oct 2 19:26:10.865697 systemd[1169]: Startup finished in 71ms. Oct 2 19:26:10.865836 systemd[1]: Started user@500.service. Oct 2 19:26:10.866907 systemd[1]: Started session-1.scope. Oct 2 19:26:10.918395 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:41570.service. Oct 2 19:26:10.957416 sshd[1178]: Accepted publickey for core from 10.0.0.1 port 41570 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:10.959160 sshd[1178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:10.962802 systemd-logind[1099]: New session 2 of user core. Oct 2 19:26:10.963709 systemd[1]: Started session-2.scope. Oct 2 19:26:11.018671 sshd[1178]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:11.022002 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:41586.service. Oct 2 19:26:11.022484 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:41570.service: Deactivated successfully. Oct 2 19:26:11.023052 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:26:11.023574 systemd-logind[1099]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:26:11.024390 systemd-logind[1099]: Removed session 2. Oct 2 19:26:11.059020 sshd[1183]: Accepted publickey for core from 10.0.0.1 port 41586 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:11.060177 sshd[1183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:11.063715 systemd-logind[1099]: New session 3 of user core. Oct 2 19:26:11.064543 systemd[1]: Started session-3.scope. Oct 2 19:26:11.113533 sshd[1183]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:11.116102 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:41586.service: Deactivated successfully. Oct 2 19:26:11.116591 systemd[1]: session-3.scope: Deactivated successfully. 
Oct 2 19:26:11.117055 systemd-logind[1099]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:26:11.118096 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:41598.service. Oct 2 19:26:11.118778 systemd-logind[1099]: Removed session 3. Oct 2 19:26:11.151857 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:11.152941 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:11.156206 systemd-logind[1099]: New session 4 of user core. Oct 2 19:26:11.157245 systemd[1]: Started session-4.scope. Oct 2 19:26:11.211618 sshd[1190]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:11.214178 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:41598.service: Deactivated successfully. Oct 2 19:26:11.214694 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:26:11.215159 systemd-logind[1099]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:26:11.216149 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:41608.service. Oct 2 19:26:11.216724 systemd-logind[1099]: Removed session 4. Oct 2 19:26:11.250668 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:11.251809 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:11.255016 systemd-logind[1099]: New session 5 of user core. Oct 2 19:26:11.255765 systemd[1]: Started session-5.scope. Oct 2 19:26:11.312408 sudo[1200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:26:11.312570 sudo[1200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:11.318975 dbus-daemon[1082]: \xd0ݝ7\xe2U: received setenforce notice (enforcing=-688276912) Oct 2 19:26:11.320759 sudo[1200]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:11.322577 sshd[1196]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:11.325504 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:41608.service: Deactivated successfully. Oct 2 19:26:11.326028 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:26:11.326478 systemd-logind[1099]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:26:11.327520 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:41614.service. Oct 2 19:26:11.328210 systemd-logind[1099]: Removed session 5. Oct 2 19:26:11.363048 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:11.364415 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:11.368052 systemd-logind[1099]: New session 6 of user core. Oct 2 19:26:11.369140 systemd[1]: Started session-6.scope. Oct 2 19:26:11.422367 sudo[1208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:26:11.422536 sudo[1208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:11.425030 sudo[1208]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:11.429077 sudo[1207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:26:11.429247 sudo[1207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:11.437279 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:26:11.437000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:26:11.438507 auditctl[1211]: No rules Oct 2 19:26:11.438827 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:26:11.439001 systemd[1]: Stopped audit-rules.service. Oct 2 19:26:11.439068 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:26:11.439097 kernel: audit: type=1305 audit(1696274771.437:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:26:11.437000 audit[1211]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7caaaf10 a2=420 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:11.440306 systemd[1]: Starting audit-rules.service... Oct 2 19:26:11.443321 kernel: audit: type=1300 audit(1696274771.437:163): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7caaaf10 a2=420 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:11.443354 kernel: audit: type=1327 audit(1696274771.437:163): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:26:11.437000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:26:11.444434 kernel: audit: type=1131 audit(1696274771.438:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.454355 augenrules[1228]: No rules Oct 2 19:26:11.454903 systemd[1]: Finished audit-rules.service. Oct 2 19:26:11.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.455641 sudo[1207]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:11.454000 audit[1207]: USER_END pid=1207 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.457895 sshd[1204]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:11.459657 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:41614.service: Deactivated successfully. Oct 2 19:26:11.459974 kernel: audit: type=1130 audit(1696274771.454:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.460016 kernel: audit: type=1106 audit(1696274771.454:166): pid=1207 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:11.460032 kernel: audit: type=1104 audit(1696274771.454:167): pid=1207 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.454000 audit[1207]: CRED_DISP pid=1207 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.460350 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:26:11.460894 systemd-logind[1099]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:26:11.462161 kernel: audit: type=1106 audit(1696274771.457:168): pid=1204 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.457000 audit[1204]: USER_END pid=1204 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.463872 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:41618.service. Oct 2 19:26:11.464659 systemd-logind[1099]: Removed session 6. Oct 2 19:26:11.465026 kernel: audit: type=1104 audit(1696274771.457:169): pid=1204 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.457000 audit[1204]: CRED_DISP pid=1204 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.467269 kernel: audit: type=1131 audit(1696274771.459:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.19:22-10.0.0.1:41614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.19:22-10.0.0.1:41614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.19:22-10.0.0.1:41618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:11.497000 audit[1234]: USER_ACCT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.498284 sshd[1234]: Accepted publickey for core from 10.0.0.1 port 41618 ssh2: RSA SHA256:x9xJB2cV8UsO4GVnWDAZ5NHrwqTZPr56IKagATY++jc Oct 2 19:26:11.498000 audit[1234]: CRED_ACQ pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.498000 audit[1234]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe530428c0 a2=3 a3=0 items=0 ppid=1 pid=1234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:11.498000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:26:11.499426 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:11.502561 systemd-logind[1099]: New session 7 of user core. Oct 2 19:26:11.503328 systemd[1]: Started session-7.scope. Oct 2 19:26:11.505000 audit[1234]: USER_START pid=1234 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.506000 audit[1236]: CRED_ACQ pid=1236 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:11.554000 audit[1237]: USER_ACCT pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.555000 audit[1237]: CRED_REFR pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:11.555845 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:26:11.556019 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:11.556000 audit[1237]: USER_START pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.064901 systemd[1]: Reloading. 
Oct 2 19:26:12.130440 /usr/lib/systemd/system-generators/torcx-generator[1267]: time="2023-10-02T19:26:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:12.130485 /usr/lib/systemd/system-generators/torcx-generator[1267]: time="2023-10-02T19:26:12Z" level=info msg="torcx already run" Oct 2 19:26:12.194565 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:12.194581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:26:12.214528 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit: BPF prog-id=34 op=LOAD Oct 2 19:26:12.271000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit: BPF prog-id=35 op=LOAD Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.271000 audit: BPF prog-id=36 
op=LOAD Oct 2 19:26:12.271000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:26:12.271000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit: BPF prog-id=37 op=LOAD Oct 2 19:26:12.272000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit: BPF prog-id=38 op=LOAD Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.272000 audit: BPF prog-id=39 op=LOAD Oct 2 19:26:12.272000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:26:12.272000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.273000 audit: BPF prog-id=40 op=LOAD Oct 2 19:26:12.273000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit: BPF prog-id=41 op=LOAD Oct 2 19:26:12.274000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit: BPF prog-id=42 op=LOAD Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.275000 audit: BPF prog-id=43 op=LOAD Oct 2 19:26:12.275000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:26:12.275000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit: BPF prog-id=44 op=LOAD Oct 2 19:26:12.276000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit: BPF prog-id=45 op=LOAD Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.276000 audit: BPF prog-id=46 op=LOAD Oct 2 19:26:12.276000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:26:12.276000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.278000 audit: BPF prog-id=47 op=LOAD Oct 2 19:26:12.278000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:12.279000 audit: BPF prog-id=48 op=LOAD Oct 2 19:26:12.279000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:26:12.288764 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:26:12.293430 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:26:12.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.293982 systemd[1]: Reached target network-online.target. Oct 2 19:26:12.295381 systemd[1]: Started kubelet.service. Oct 2 19:26:12.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.305307 systemd[1]: Starting coreos-metadata.service... Oct 2 19:26:12.312912 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:26:12.313125 systemd[1]: Finished coreos-metadata.service. Oct 2 19:26:12.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.344363 kubelet[1307]: E1002 19:26:12.344200 1307 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 19:26:12.346337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:26:12.346513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:26:12.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:26:12.619960 systemd[1]: Stopped kubelet.service. Oct 2 19:26:12.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:12.635324 systemd[1]: Reloading. 
Oct 2 19:26:12.701270 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2023-10-02T19:26:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:12.701301 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2023-10-02T19:26:12Z" level=info msg="torcx already run" Oct 2 19:26:14.369836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:14.369852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:26:14.389204 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.442000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit: BPF prog-id=49 op=LOAD Oct 2 19:26:14.443000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit: BPF prog-id=50 op=LOAD Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit: BPF prog-id=51 
op=LOAD Oct 2 19:26:14.443000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:26:14.443000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.443000 audit: BPF prog-id=52 op=LOAD Oct 2 19:26:14.444000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit: BPF prog-id=53 op=LOAD Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit: BPF prog-id=54 op=LOAD Oct 2 19:26:14.444000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:26:14.444000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit: BPF prog-id=55 op=LOAD Oct 2 19:26:14.445000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.445000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit: BPF prog-id=56 op=LOAD Oct 2 19:26:14.446000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit: BPF prog-id=57 op=LOAD Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.446000 audit: BPF prog-id=58 op=LOAD Oct 2 19:26:14.446000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:26:14.446000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit: BPF prog-id=59 op=LOAD Oct 2 19:26:14.447000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit: BPF prog-id=60 op=LOAD Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.447000 audit: BPF prog-id=61 op=LOAD Oct 2 19:26:14.447000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:26:14.447000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.449000 audit: BPF prog-id=62 op=LOAD Oct 2 19:26:14.449000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.451000 audit: BPF prog-id=63 op=LOAD Oct 2 19:26:14.451000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:26:14.462339 systemd[1]: Started kubelet.service. Oct 2 19:26:14.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:14.665954 kubelet[1416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:26:14.665954 kubelet[1416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:26:14.665954 kubelet[1416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:26:14.665954 kubelet[1416]: I1002 19:26:14.665899 1416 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:26:14.803043 kubelet[1416]: I1002 19:26:14.802980 1416 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:26:14.803043 kubelet[1416]: I1002 19:26:14.803027 1416 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:26:14.803328 kubelet[1416]: I1002 19:26:14.803307 1416 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:26:14.807813 kubelet[1416]: I1002 19:26:14.807749 1416 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:26:14.817917 kubelet[1416]: I1002 19:26:14.817879 1416 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:26:14.818178 kubelet[1416]: I1002 19:26:14.818149 1416 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:26:14.818487 kubelet[1416]: I1002 19:26:14.818461 1416 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:26:14.818595 kubelet[1416]: I1002 19:26:14.818499 1416 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:26:14.818595 kubelet[1416]: I1002 19:26:14.818520 1416 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:26:14.818661 kubelet[1416]: I1002 19:26:14.818642 1416 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:26:14.818772 kubelet[1416]: I1002 19:26:14.818745 1416 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:26:14.818838 kubelet[1416]: I1002 19:26:14.818816 1416 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:26:14.818862 kubelet[1416]: I1002 19:26:14.818846 1416 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:26:14.818902 kubelet[1416]: I1002 19:26:14.818881 1416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:26:14.819692 kubelet[1416]: E1002 19:26:14.819651 1416 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:14.819869 kubelet[1416]: E1002 19:26:14.819762 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:14.819915 kubelet[1416]: I1002 19:26:14.819811 1416 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:26:14.820522 kubelet[1416]: W1002 19:26:14.820492 1416 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:26:14.821013 kubelet[1416]: I1002 19:26:14.820987 1416 server.go:1232] "Started kubelet" Oct 2 19:26:14.821101 kubelet[1416]: I1002 19:26:14.821075 1416 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:26:14.821633 kubelet[1416]: I1002 19:26:14.821568 1416 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:26:14.821000 audit[1416]: AVC avc: denied { mac_admin } for pid=1416 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.821000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:26:14.821000 audit[1416]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00097b1a0 a1=c00007d350 a2=c00097b170 a3=25 items=0 ppid=1 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.821000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:26:14.821000 audit[1416]: AVC avc: denied { mac_admin } for pid=1416 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.821000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:26:14.821000 audit[1416]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bf8dc0 a1=c00007d368 a2=c00097b230 a3=25 items=0 ppid=1 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.821000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:26:14.822369 kubelet[1416]: I1002 19:26:14.821901 1416 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:26:14.822369 kubelet[1416]: I1002 19:26:14.822008 1416 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:26:14.822369 kubelet[1416]: I1002 19:26:14.822050 1416 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:26:14.822369 kubelet[1416]: I1002 19:26:14.822105 1416 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:26:14.822369 kubelet[1416]: I1002 19:26:14.822113 1416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:26:14.822495 kubelet[1416]: E1002 19:26:14.822410 1416 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:26:14.822495 kubelet[1416]: E1002 19:26:14.822448 1416 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:26:14.822982 kubelet[1416]: I1002 19:26:14.822960 1416 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:26:14.823148 kubelet[1416]: I1002 19:26:14.823128 1416 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:26:14.823282 kubelet[1416]: I1002 19:26:14.823268 1416 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:26:14.844207 kubelet[1416]: I1002 19:26:14.844167 1416 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:26:14.844207 kubelet[1416]: I1002 19:26:14.844195 1416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:26:14.844438 kubelet[1416]: I1002 19:26:14.844237 1416 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:26:14.844682 kubelet[1416]: E1002 19:26:14.844583 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba2a8bbc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 820961223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 820961223, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:14.844840 kubelet[1416]: E1002 19:26:14.844828 1416 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:26:14.844879 kubelet[1416]: W1002 19:26:14.844867 1416 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:26:14.844904 kubelet[1416]: E1002 19:26:14.844888 1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:26:14.845173 kubelet[1416]: W1002 19:26:14.844939 1416 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:26:14.845173 kubelet[1416]: E1002 19:26:14.845001 1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:26:14.845173 kubelet[1416]: W1002 19:26:14.845018 1416 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:26:14.845173 kubelet[1416]: E1002 19:26:14.845054 1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:26:14.845814 kubelet[1416]: E1002 19:26:14.845467 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba2bf141d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 822425629, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 822425629, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group 
"" in the namespace "default"' (will not retry!) Oct 2 19:26:14.846598 kubelet[1416]: E1002 19:26:14.846517 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f84ade", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842952414, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842952414, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:14.847197 kubelet[1416]: E1002 19:26:14.847151 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f85c70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842956912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842956912, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:14.847776 kubelet[1416]: E1002 19:26:14.847693 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f86689", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842959497, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842959497, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:14.852000 audit[1429]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.852000 audit[1429]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd139aa790 a2=0 a3=7ffd139aa77c items=0 ppid=1416 pid=1429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:26:14.853000 audit[1435]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.853000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff9ce39040 a2=0 a3=7fff9ce3902c items=0 ppid=1416 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:26:14.855557 kubelet[1416]: I1002 19:26:14.855527 1416 policy_none.go:49] "None policy: Start" Oct 2 19:26:14.856276 kubelet[1416]: I1002 19:26:14.856258 1416 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:26:14.856345 kubelet[1416]: I1002 19:26:14.856290 1416 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:26:14.862332 systemd[1]: Created slice kubepods.slice. Oct 2 19:26:14.866162 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:26:14.868738 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:26:14.855000 audit[1437]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.855000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf52ce720 a2=0 a3=7ffdf52ce70c items=0 ppid=1416 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:26:14.871000 audit[1442]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.871000 audit[1442]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8e993c20 a2=0 a3=7fff8e993c0c items=0 ppid=1416 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:26:14.874654 kubelet[1416]: I1002 19:26:14.874614 1416 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:26:14.873000 audit[1416]: AVC avc: denied { mac_admin } for pid=1416 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:14.873000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:26:14.873000 audit[1416]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e38c00 a1=c000e420d8 a2=c000e38bd0 a3=25 items=0 ppid=1 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.873000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:26:14.874920 kubelet[1416]: I1002 19:26:14.874698 1416 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:26:14.874920 kubelet[1416]: I1002 19:26:14.874879 1416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:26:14.875829 kubelet[1416]: E1002 19:26:14.875735 1416 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.19\" not found" Oct 2 19:26:14.877707 kubelet[1416]: E1002 19:26:14.877604 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba5f538eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 876305643, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 876305643, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:14.909000 audit[1447]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.909000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe3af16740 a2=0 a3=7ffe3af1672c items=0 ppid=1416 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:26:14.911152 kubelet[1416]: I1002 19:26:14.911100 1416 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 2 19:26:14.911000 audit[1448]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:14.911000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe7b3fc230 a2=0 a3=7ffe7b3fc21c items=0 ppid=1416 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:26:14.911000 audit[1449]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.911000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc892ce520 a2=0 a3=7ffc892ce50c items=0 ppid=1416 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:26:14.912529 kubelet[1416]: I1002 19:26:14.912370 1416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:26:14.912529 kubelet[1416]: I1002 19:26:14.912406 1416 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:26:14.912529 kubelet[1416]: I1002 19:26:14.912428 1416 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:26:14.912529 kubelet[1416]: E1002 19:26:14.912470 1416 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:26:14.912000 audit[1450]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:14.912000 audit[1450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe5211cf0 a2=0 a3=7fffe5211cdc items=0 ppid=1416 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:26:14.913000 audit[1451]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.913000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc8305e090 a2=0 a3=7ffc8305e07c items=0 ppid=1416 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:26:14.914010 kubelet[1416]: W1002 19:26:14.913918 1416 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" 
in API group "node.k8s.io" at the cluster scope Oct 2 19:26:14.914010 kubelet[1416]: E1002 19:26:14.913943 1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:26:14.913000 audit[1452]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:14.913000 audit[1452]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffca58b7de0 a2=0 a3=7ffca58b7dcc items=0 ppid=1416 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:26:14.914000 audit[1453]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:14.914000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe280c7820 a2=0 a3=7ffe280c780c items=0 ppid=1416 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:26:14.914000 audit[1454]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:14.914000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff42158520 a2=0 a3=7fff4215850c items=0 ppid=1416 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:14.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:26:14.925125 kubelet[1416]: I1002 19:26:14.924463 1416 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.19" Oct 2 19:26:14.926041 kubelet[1416]: E1002 19:26:14.926000 1416 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.19" Oct 2 19:26:14.926041 kubelet[1416]: E1002 19:26:14.925962 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f84ade", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", 
UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842952414, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 924371163, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f84ade" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:14.926771 kubelet[1416]: E1002 19:26:14.926721 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f85c70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842956912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 924395649, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f85c70" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:14.927761 kubelet[1416]: E1002 19:26:14.927674 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f86689", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842959497, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 924399075, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f86689" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:15.046591 kubelet[1416]: E1002 19:26:15.046548 1416 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:26:15.127816 kubelet[1416]: I1002 19:26:15.127758 1416 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.19" Oct 2 19:26:15.129415 kubelet[1416]: E1002 19:26:15.129307 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f84ade", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842952414, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 127706780, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f84ade" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:15.129638 kubelet[1416]: E1002 19:26:15.129428 1416 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.19" Oct 2 19:26:15.130260 kubelet[1416]: E1002 19:26:15.130172 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f85c70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842956912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 127718863, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f85c70" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:15.130981 kubelet[1416]: E1002 19:26:15.130900 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f86689", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842959497, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 127721297, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f86689" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:15.449182 kubelet[1416]: E1002 19:26:15.449144 1416 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:26:15.530385 kubelet[1416]: I1002 19:26:15.530346 1416 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.19" Oct 2 19:26:15.531457 kubelet[1416]: E1002 19:26:15.531405 1416 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.19" Oct 2 19:26:15.531634 kubelet[1416]: E1002 19:26:15.531421 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f84ade", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842952414, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 530292823, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f84ade" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:15.532675 kubelet[1416]: E1002 19:26:15.532630 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f85c70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842956912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 530305797, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f85c70" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:26:15.533490 kubelet[1416]: E1002 19:26:15.533398 1416 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19.178a60eba3f86689", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.19", UID:"10.0.0.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.19"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 26, 14, 842959497, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 26, 15, 530311047, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.19"}': 'events "10.0.0.19.178a60eba3f86689" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:26:15.800623 kubelet[1416]: W1002 19:26:15.800499 1416 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:26:15.800623 kubelet[1416]: E1002 19:26:15.800538 1416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:26:15.807634 kubelet[1416]: I1002 19:26:15.807607 1416 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:26:15.820849 kubelet[1416]: E1002 19:26:15.820819 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:16.253849 kubelet[1416]: E1002 19:26:16.253718 1416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.19\" not found" node="10.0.0.19" Oct 2 19:26:16.315165 kubelet[1416]: E1002 19:26:16.315110 1416 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.19" not found Oct 2 19:26:16.332167 kubelet[1416]: I1002 19:26:16.332124 1416 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.19" Oct 2 19:26:16.335159 kubelet[1416]: I1002 19:26:16.335116 1416 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.19" Oct 2 19:26:16.345368 kubelet[1416]: I1002 19:26:16.345320 1416 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:26:16.345858 env[1110]: time="2023-10-02T19:26:16.345818073Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:26:16.346148 kubelet[1416]: I1002 19:26:16.346085 1416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:26:16.474767 sudo[1237]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:16.474000 audit[1237]: USER_END pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:16.476004 sshd[1234]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:16.477925 kernel: kauditd_printk_skb: 411 callbacks suppressed Oct 2 19:26:16.477969 kernel: audit: type=1106 audit(1696274776.474:547): pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:16.478007 kernel: audit: type=1104 audit(1696274776.474:548): pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:26:16.474000 audit[1237]: CRED_DISP pid=1237 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:16.478000 audit[1234]: USER_END pid=1234 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:16.480197 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:41618.service: Deactivated successfully. Oct 2 19:26:16.481036 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:26:16.481919 systemd-logind[1099]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:26:16.482660 systemd-logind[1099]: Removed session 7. Oct 2 19:26:16.482969 kernel: audit: type=1106 audit(1696274776.478:549): pid=1234 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:16.483018 kernel: audit: type=1104 audit(1696274776.478:550): pid=1234 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:16.478000 audit[1234]: CRED_DISP pid=1234 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:26:16.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.19:22-10.0.0.1:41618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:16.487341 kernel: audit: type=1131 audit(1696274776.479:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.19:22-10.0.0.1:41618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:16.820258 kubelet[1416]: I1002 19:26:16.820191 1416 apiserver.go:52] "Watching apiserver" Oct 2 19:26:16.821356 kubelet[1416]: E1002 19:26:16.821286 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:16.822634 kubelet[1416]: I1002 19:26:16.822587 1416 topology_manager.go:215] "Topology Admit Handler" podUID="0cb1d923-dcc7-461e-ae96-4f29a77bff62" podNamespace="calico-system" podName="calico-node-6pn5j" Oct 2 19:26:16.822716 kubelet[1416]: I1002 19:26:16.822704 1416 topology_manager.go:215] "Topology Admit Handler" podUID="d9497394-cb1c-49d0-ae72-4e3c823d6a6c" podNamespace="kube-system" podName="kube-proxy-x6vv7" Oct 2 19:26:16.823559 kubelet[1416]: I1002 19:26:16.823538 1416 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:26:16.828516 systemd[1]: Created slice kubepods-besteffort-podd9497394_cb1c_49d0_ae72_4e3c823d6a6c.slice. 
Oct 2 19:26:16.834883 kubelet[1416]: I1002 19:26:16.834825 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-policysync\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.834883 kubelet[1416]: I1002 19:26:16.834887 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cb1d923-dcc7-461e-ae96-4f29a77bff62-tigera-ca-bundle\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835084 kubelet[1416]: I1002 19:26:16.834916 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-cni-net-dir\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835084 kubelet[1416]: I1002 19:26:16.834945 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-flexvol-driver-host\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835084 kubelet[1416]: I1002 19:26:16.834976 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-xtables-lock\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835084 kubelet[1416]: I1002 19:26:16.835055 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0cb1d923-dcc7-461e-ae96-4f29a77bff62-node-certs\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835178 kubelet[1416]: I1002 19:26:16.835093 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-var-lib-calico\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835178 kubelet[1416]: I1002 19:26:16.835113 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6h25\" (UniqueName: \"kubernetes.io/projected/0cb1d923-dcc7-461e-ae96-4f29a77bff62-kube-api-access-c6h25\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835178 kubelet[1416]: I1002 19:26:16.835133 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9497394-cb1c-49d0-ae72-4e3c823d6a6c-kube-proxy\") pod \"kube-proxy-x6vv7\" (UID: \"d9497394-cb1c-49d0-ae72-4e3c823d6a6c\") " pod="kube-system/kube-proxy-x6vv7" Oct 2 19:26:16.835178 kubelet[1416]: I1002 19:26:16.835150 1416 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-var-run-calico\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835178 kubelet[1416]: I1002 19:26:16.835165 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9497394-cb1c-49d0-ae72-4e3c823d6a6c-lib-modules\") pod \"kube-proxy-x6vv7\" (UID: \"d9497394-cb1c-49d0-ae72-4e3c823d6a6c\") " pod="kube-system/kube-proxy-x6vv7" Oct 2 19:26:16.835308 kubelet[1416]: I1002 19:26:16.835183 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg4qn\" (UniqueName: \"kubernetes.io/projected/d9497394-cb1c-49d0-ae72-4e3c823d6a6c-kube-api-access-pg4qn\") pod \"kube-proxy-x6vv7\" (UID: \"d9497394-cb1c-49d0-ae72-4e3c823d6a6c\") " pod="kube-system/kube-proxy-x6vv7" Oct 2 19:26:16.835308 kubelet[1416]: I1002 19:26:16.835212 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-lib-modules\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835308 kubelet[1416]: I1002 19:26:16.835245 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-cni-bin-dir\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835308 kubelet[1416]: I1002 19:26:16.835291 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0cb1d923-dcc7-461e-ae96-4f29a77bff62-cni-log-dir\") pod \"calico-node-6pn5j\" (UID: \"0cb1d923-dcc7-461e-ae96-4f29a77bff62\") " pod="calico-system/calico-node-6pn5j" Oct 2 19:26:16.835392 kubelet[1416]: I1002 19:26:16.835314 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9497394-cb1c-49d0-ae72-4e3c823d6a6c-xtables-lock\") pod \"kube-proxy-x6vv7\" (UID: \"d9497394-cb1c-49d0-ae72-4e3c823d6a6c\") " pod="kube-system/kube-proxy-x6vv7" Oct 2 19:26:16.837822 systemd[1]: Created slice kubepods-besteffort-pod0cb1d923_dcc7_461e_ae96_4f29a77bff62.slice. Oct 2 19:26:16.937417 kubelet[1416]: E1002 19:26:16.937368 1416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:26:16.937417 kubelet[1416]: W1002 19:26:16.937396 1416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:26:16.937417 kubelet[1416]: E1002 19:26:16.937431 1416 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:26:16.937681 kubelet[1416]: E1002 19:26:16.937664 1416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:26:16.937681 kubelet[1416]: W1002 19:26:16.937678 1416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:26:16.937778 kubelet[1416]: E1002 19:26:16.937691 1416 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:26:16.939506 kubelet[1416]: E1002 19:26:16.939482 1416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:26:16.939506 kubelet[1416]: W1002 19:26:16.939503 1416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:26:16.939606 kubelet[1416]: E1002 19:26:16.939526 1416 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:26:16.946012 kubelet[1416]: E1002 19:26:16.945980 1416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:26:16.946012 kubelet[1416]: W1002 19:26:16.945995 1416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:26:16.946012 kubelet[1416]: E1002 19:26:16.946013 1416 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 2 19:26:16.951087 kubelet[1416]: E1002 19:26:16.951066 1416 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 2 19:26:16.951087 kubelet[1416]: W1002 19:26:16.951078 1416 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 2 19:26:16.951087 kubelet[1416]: E1002 19:26:16.951091 1416 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 2 19:26:17.135956 kubelet[1416]: E1002 19:26:17.135779 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:17.136626 env[1110]: time="2023-10-02T19:26:17.136573222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6vv7,Uid:d9497394-cb1c-49d0-ae72-4e3c823d6a6c,Namespace:kube-system,Attempt:0,}" Oct 2 19:26:17.139616 kubelet[1416]: E1002 19:26:17.139590 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:17.140375 env[1110]: time="2023-10-02T19:26:17.140310941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pn5j,Uid:0cb1d923-dcc7-461e-ae96-4f29a77bff62,Namespace:calico-system,Attempt:0,}" Oct 2 19:26:17.822276 kubelet[1416]: E1002 19:26:17.822213 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:18.119985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343332348.mount: Deactivated successfully. Oct 2 19:26:18.126721 env[1110]: time="2023-10-02T19:26:18.126671880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.127687 env[1110]: time="2023-10-02T19:26:18.127638463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.129987 env[1110]: time="2023-10-02T19:26:18.129954967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.131204 env[1110]: time="2023-10-02T19:26:18.131175666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.132273 env[1110]: time="2023-10-02T19:26:18.132249280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.133571 env[1110]: time="2023-10-02T19:26:18.133550991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.135085 env[1110]: time="2023-10-02T19:26:18.135060301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.136762 env[1110]: time="2023-10-02T19:26:18.136721867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:18.161762 env[1110]: time="2023-10-02T19:26:18.161671437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:18.161762 env[1110]: time="2023-10-02T19:26:18.161738663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:18.161762 env[1110]: time="2023-10-02T19:26:18.161752599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:18.162426 env[1110]: time="2023-10-02T19:26:18.161956351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b pid=1479 runtime=io.containerd.runc.v2 Oct 2 19:26:18.164926 env[1110]: time="2023-10-02T19:26:18.163836868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:18.164926 env[1110]: time="2023-10-02T19:26:18.163882523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:18.164926 env[1110]: time="2023-10-02T19:26:18.163895047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:18.164926 env[1110]: time="2023-10-02T19:26:18.164047072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84d4c7ada786b1d979319993f655b0ee78081820544b7f68aeb1fe6c54b15d4f pid=1488 runtime=io.containerd.runc.v2 Oct 2 19:26:18.174342 systemd[1]: Started cri-containerd-b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b.scope. Oct 2 19:26:18.187055 systemd[1]: Started cri-containerd-84d4c7ada786b1d979319993f655b0ee78081820544b7f68aeb1fe6c54b15d4f.scope. 
Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196048 kernel: audit: type=1400 audit(1696274778.188:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196171 kernel: audit: type=1400 audit(1696274778.188:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196202 kernel: audit: type=1400 audit(1696274778.188:554): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.200826 kernel: audit: type=1400 audit(1696274778.188:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.200995 kernel: audit: type=1400 audit(1696274778.188:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit: BPF prog-id=64 op=LOAD Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:26:18.191000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00023dc48 a2=10 a3=1c items=0 ppid=1479 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626134646532396639313733303261383438653735643864306165 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00023d6b0 a2=3c a3=c items=0 ppid=1479 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626134646532396639313733303261383438653735643864306165 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.191000 audit: BPF prog-id=65 op=LOAD Oct 2 19:26:18.191000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00023d9d8 a2=78 a3=c000092ce0 items=0 ppid=1479 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626134646532396639313733303261383438653735643864306165 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.195000 audit: BPF prog-id=66 op=LOAD Oct 2 19:26:18.195000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00023d770 a2=78 a3=c000092d28 items=0 ppid=1479 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626134646532396639313733303261383438653735643864306165 Oct 2 19:26:18.196000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:26:18.196000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { bpf } for 
pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.196000 audit: BPF prog-id=67 op=LOAD Oct 2 19:26:18.196000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00023dc30 a2=78 a3=c000093138 items=0 ppid=1479 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235626134646532396639313733303261383438653735643864306165 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit: BPF prog-id=68 op=LOAD Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=1488 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643463376164613738366231643937393331393939336636353562 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=1488 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643463376164613738366231643937393331393939336636353562 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.211000 audit: BPF prog-id=69 op=LOAD Oct 2 19:26:18.211000 audit[1509]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000024420 items=0 ppid=1488 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643463376164613738366231643937393331393939336636353562 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit: BPF prog-id=70 op=LOAD Oct 2 19:26:18.212000 audit[1509]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000024468 items=0 ppid=1488 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643463376164613738366231643937393331393939336636353562 Oct 2 19:26:18.212000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:26:18.212000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { perfmon } for pid=1509 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit[1509]: AVC avc: denied { bpf } for pid=1509 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:26:18.212000 audit: BPF prog-id=71 op=LOAD Oct 2 19:26:18.212000 audit[1509]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c000024878 items=0 ppid=1488 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:18.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643463376164613738366231643937393331393939336636353562 Oct 2 19:26:18.228228 env[1110]: time="2023-10-02T19:26:18.228164254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pn5j,Uid:0cb1d923-dcc7-461e-ae96-4f29a77bff62,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\"" Oct 2 19:26:18.229418 kubelet[1416]: E1002 19:26:18.229385 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:18.230923 env[1110]: time="2023-10-02T19:26:18.230767286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0\"" Oct 2 19:26:18.232867 env[1110]: time="2023-10-02T19:26:18.232826478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6vv7,Uid:d9497394-cb1c-49d0-ae72-4e3c823d6a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"84d4c7ada786b1d979319993f655b0ee78081820544b7f68aeb1fe6c54b15d4f\"" Oct 2 19:26:18.234062 kubelet[1416]: E1002 19:26:18.233848 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:18.822708 kubelet[1416]: E1002 19:26:18.822621 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:19.823882 kubelet[1416]: E1002 19:26:19.823811 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:20.824850 kubelet[1416]: E1002 19:26:20.824783 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:20.997087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068298879.mount: Deactivated successfully. 
Oct 2 19:26:21.825351 kubelet[1416]: E1002 19:26:21.825285 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:22.220096 env[1110]: time="2023-10-02T19:26:22.219947563Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:22.221843 env[1110]: time="2023-10-02T19:26:22.221810316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed8b7bbb113fecfcce8e15c7d7232b3fe31ed6f37b04df455f6a3f2bc8695d72,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:22.224522 env[1110]: time="2023-10-02T19:26:22.224471036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:22.226714 env[1110]: time="2023-10-02T19:26:22.226654481Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:182a323c25a3503be8c504892a12a55d99a42c3a582cb8e93a1ecc7c193a44c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:22.227749 env[1110]: time="2023-10-02T19:26:22.227701815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.25.0\" returns image reference \"sha256:ed8b7bbb113fecfcce8e15c7d7232b3fe31ed6f37b04df455f6a3f2bc8695d72\"" Oct 2 19:26:22.228589 env[1110]: time="2023-10-02T19:26:22.228557309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:26:22.230115 env[1110]: time="2023-10-02T19:26:22.230080326Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 2 19:26:22.244392 env[1110]: time="2023-10-02T19:26:22.244326903Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f\"" Oct 2 19:26:22.245138 env[1110]: time="2023-10-02T19:26:22.245103389Z" level=info msg="StartContainer for \"4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f\"" Oct 2 19:26:22.270734 systemd[1]: Started cri-containerd-4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f.scope. 
Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.306477 kernel: kauditd_printk_skb: 109 callbacks suppressed Oct 2 19:26:22.306619 kernel: audit: type=1400 audit(1696274782.297:588): avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.306640 kernel: audit: type=1300 audit(1696274782.297:588): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:22.306655 kernel: audit: type=1327 audit(1696274782.297:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366465393935646563396535663233316336373562646536663666 Oct 2 19:26:22.297000 audit[1554]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:22.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366465393935646563396535663233316336373562646536663666 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.310742 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.310807 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.314753 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.314815 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { 
perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.318653 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.318686 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.321878 kernel: audit: type=1400 audit(1696274782.297:589): avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.297000 audit: BPF prog-id=72 op=LOAD Oct 2 19:26:22.297000 audit[1554]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000099030 items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:22.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366465393935646563396535663233316336373562646536663666 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { 
perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.300000 audit: BPF prog-id=73 op=LOAD Oct 2 19:26:22.300000 audit[1554]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000099078 items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:22.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366465393935646563396535663233316336373562646536663666 Oct 2 19:26:22.303000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:26:22.303000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { 
perfmon } for pid=1554 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit[1554]: AVC avc: denied { bpf } for pid=1554 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:22.303000 audit: BPF prog-id=74 op=LOAD Oct 2 19:26:22.303000 audit[1554]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c000099108 items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:22.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464366465393935646563396535663233316336373562646536663666 Oct 2 19:26:22.351803 env[1110]: time="2023-10-02T19:26:22.348132866Z" level=info msg="StartContainer for \"4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f\" returns successfully" Oct 2 19:26:22.355117 systemd[1]: cri-containerd-4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f.scope: Deactivated successfully. Oct 2 19:26:22.361000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:26:22.371907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f-rootfs.mount: Deactivated successfully. Oct 2 19:26:22.826379 kubelet[1416]: E1002 19:26:22.826320 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:22.905093 env[1110]: time="2023-10-02T19:26:22.905032392Z" level=info msg="shim disconnected" id=4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f Oct 2 19:26:22.905093 env[1110]: time="2023-10-02T19:26:22.905082115Z" level=warning msg="cleaning up after shim disconnected" id=4d6de995dec9e5f231c675bde6f6f60b12a84f04e28ba305973f52776071aa5f namespace=k8s.io Oct 2 19:26:22.905093 env[1110]: time="2023-10-02T19:26:22.905094428Z" level=info msg="cleaning up dead shim" Oct 2 19:26:22.914945 env[1110]: time="2023-10-02T19:26:22.914894196Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:26:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1595 runtime=io.containerd.runc.v2\n" Oct 2 19:26:23.206218 kubelet[1416]: E1002 19:26:23.205932 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:23.827414 kubelet[1416]: E1002 19:26:23.827369 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:24.828476 kubelet[1416]: E1002 19:26:24.828411 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:25.177713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680108028.mount: Deactivated successfully. 
Oct 2 19:26:25.828763 kubelet[1416]: E1002 19:26:25.828705 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:25.891537 env[1110]: time="2023-10-02T19:26:25.891455531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:25.893348 env[1110]: time="2023-10-02T19:26:25.893314107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:25.894856 env[1110]: time="2023-10-02T19:26:25.894814080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:25.896002 env[1110]: time="2023-10-02T19:26:25.895974626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:25.896235 env[1110]: time="2023-10-02T19:26:25.896203505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 2 19:26:25.896814 env[1110]: time="2023-10-02T19:26:25.896749018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.25.0\"" Oct 2 19:26:25.897877 env[1110]: time="2023-10-02T19:26:25.897842369Z" level=info msg="CreateContainer within sandbox \"84d4c7ada786b1d979319993f655b0ee78081820544b7f68aeb1fe6c54b15d4f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:26:25.910877 env[1110]: time="2023-10-02T19:26:25.910822341Z" level=info msg="CreateContainer within sandbox \"84d4c7ada786b1d979319993f655b0ee78081820544b7f68aeb1fe6c54b15d4f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0028775cec4921a639a6434839cb165448f4b818fa09b5a0908365e14c80293b\"" Oct 2 19:26:25.911374 env[1110]: time="2023-10-02T19:26:25.911336024Z" level=info msg="StartContainer for \"0028775cec4921a639a6434839cb165448f4b818fa09b5a0908365e14c80293b\"" Oct 2 19:26:25.927921 systemd[1]: Started cri-containerd-0028775cec4921a639a6434839cb165448f4b818fa09b5a0908365e14c80293b.scope. 
Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1488 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:25.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323837373563656334393231613633396136343334383339636231 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit: BPF prog-id=75 op=LOAD Oct 2 19:26:25.939000 audit[1620]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0002d7be0 items=0 ppid=1488 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:25.939000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323837373563656334393231613633396136343334383339636231 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.939000 audit: BPF prog-id=76 op=LOAD Oct 2 19:26:25.939000 audit[1620]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0002d7c28 items=0 ppid=1488 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:25.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323837373563656334393231613633396136343334383339636231 Oct 2 19:26:25.940000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:26:25.940000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:25.940000 audit: BPF prog-id=77 op=LOAD Oct 2 19:26:25.940000 audit[1620]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002d7cb8 items=0 ppid=1488 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:25.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323837373563656334393231613633396136343334383339636231 Oct 2 19:26:25.954380 env[1110]: time="2023-10-02T19:26:25.954303243Z" level=info msg="StartContainer for \"0028775cec4921a639a6434839cb165448f4b818fa09b5a0908365e14c80293b\" returns successfully" Oct 2 19:26:26.003000 audit[1674]: NETFILTER_CFG table=mangle:14 family=10 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.003000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdebd051d0 a2=0 a3=7ffdebd051bc items=0 ppid=1630 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:26:26.004000 audit[1673]: NETFILTER_CFG table=mangle:15 family=2 entries=1 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.004000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffece35dd80 a2=0 a3=7ffece35dd6c items=0 ppid=1630 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:26:26.004000 audit[1675]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.004000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdd1675f0 a2=0 a3=7ffcdd1675dc items=0 ppid=1630 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:26:26.005000 audit[1676]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.005000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff192f4d50 a2=0 a3=7fff192f4d3c items=0 ppid=1630 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:26:26.005000 audit[1677]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.005000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc798ce710 a2=0 a3=7ffc798ce6fc items=0 ppid=1630 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:26:26.006000 audit[1678]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.006000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0ef81260 a2=0 a3=7ffd0ef8124c items=0 ppid=1630 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:26:26.104000 audit[1679]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.104000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda9fb37d0 a2=0 a3=7ffda9fb37bc items=0 ppid=1630 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.104000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:26:26.107000 audit[1681]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.107000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeab4409d0 a2=0 a3=7ffeab4409bc items=0 ppid=1630 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:26:26.111000 audit[1684]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.111000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff1ca56920 a2=0 a3=7fff1ca5690c items=0 ppid=1630 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:26:26.112000 audit[1685]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.112000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca8aeadf0 a2=0 a3=7ffca8aeaddc items=0 ppid=1630 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:26:26.114000 audit[1687]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.114000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd2c505e0 a2=0 a3=7fffd2c505cc items=0 ppid=1630 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:26:26.115000 audit[1688]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.115000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc032043e0 a2=0 a3=7ffc032043cc 
items=0 ppid=1630 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:26:26.118000 audit[1690]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.118000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc8a6a4d0 a2=0 a3=7ffdc8a6a4bc items=0 ppid=1630 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:26:26.121000 audit[1693]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.121000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe87d6db80 a2=0 a3=7ffe87d6db6c items=0 ppid=1630 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:26:26.122000 audit[1694]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.122000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2a711890 a2=0 a3=7ffd2a71187c items=0 ppid=1630 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:26:26.124000 audit[1696]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.124000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc656b1750 a2=0 a3=7ffc656b173c items=0 ppid=1630 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:26:26.125000 audit[1697]: NETFILTER_CFG table=filter:30 family=2 entries=1 
op=nft_register_chain pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.125000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe32082740 a2=0 a3=7ffe3208272c items=0 ppid=1630 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:26:26.127000 audit[1699]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.127000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe286c9fe0 a2=0 a3=7ffe286c9fcc items=0 ppid=1630 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:26:26.130000 audit[1702]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.130000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff370a3c30 a2=0 a3=7fff370a3c1c items=0 ppid=1630 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:26:26.133000 audit[1705]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.133000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdf00dd30 a2=0 a3=7fffdf00dd1c items=0 ppid=1630 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:26:26.134000 audit[1706]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.134000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca0cd4e70 a2=0 a3=7ffca0cd4e5c items=0 ppid=1630 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.134000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:26:26.136000 audit[1708]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.136000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffecd5b9480 a2=0 a3=7ffecd5b946c items=0 ppid=1630 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:26:26.159000 audit[1713]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.159000 audit[1713]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff9c129c10 a2=0 a3=7fff9c129bfc items=0 ppid=1630 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:26:26.160000 audit[1714]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.160000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc0bf98c0 a2=0 a3=7fffc0bf98ac items=0 ppid=1630 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:26:26.162000 audit[1716]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:26:26.162000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffca371c780 a2=0 a3=7ffca371c76c items=0 ppid=1630 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:26:26.175000 audit[1722]: NETFILTER_CFG table=filter:39 family=2 entries=9 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:26.175000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=5660 a0=3 a1=7ffd6a491710 a2=0 a3=7ffd6a4916fc items=0 ppid=1630 pid=1722 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:26.185000 audit[1722]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:26.185000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd6a491710 a2=0 a3=7ffd6a4916fc items=0 ppid=1630 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:26.186000 audit[1728]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.186000 audit[1728]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcdb3ec4d0 a2=0 a3=7ffcdb3ec4bc items=0 ppid=1630 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:26:26.188000 audit[1730]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.188000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc4761d1c0 a2=0 a3=7ffc4761d1ac items=0 ppid=1630 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:26:26.192000 audit[1733]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.192000 audit[1733]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc39d52050 a2=0 a3=7ffc39d5203c items=0 ppid=1630 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:26:26.193000 audit[1734]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
19:26:26.193000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9d61a050 a2=0 a3=7fff9d61a03c items=0 ppid=1630 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:26:26.195000 audit[1736]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.195000 audit[1736]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcfdc89960 a2=0 a3=7ffcfdc8994c items=0 ppid=1630 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:26:26.195000 audit[1737]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.195000 audit[1737]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbe6772c0 a2=0 a3=7ffcbe6772ac items=0 ppid=1630 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:26:26.201000 audit[1739]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.201000 audit[1739]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec6950ab0 a2=0 a3=7ffec6950a9c items=0 ppid=1630 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:26:26.203000 audit[1742]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.203000 audit[1742]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd9cde9090 a2=0 a3=7ffd9cde907c items=0 ppid=1630 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.203000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:26:26.204000 audit[1743]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.204000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4e8ad9f0 a2=0 a3=7ffc4e8ad9dc items=0 ppid=1630 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:26:26.206000 audit[1745]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.206000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe38459f10 a2=0 a3=7ffe38459efc items=0 ppid=1630 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:26:26.207000 audit[1746]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.207000 audit[1746]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8c7263b0 a2=0 a3=7fff8c72639c items=0 ppid=1630 pid=1746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:26:26.210893 kubelet[1416]: E1002 19:26:26.210845 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:26.210000 audit[1748]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.210000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddaf18eb0 a2=0 a3=7ffddaf18e9c items=0 ppid=1630 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:26:26.213000 audit[1751]: NETFILTER_CFG table=filter:53 family=10 
entries=1 op=nft_register_rule pid=1751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.213000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde1ac70a0 a2=0 a3=7ffde1ac708c items=0 ppid=1630 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:26:26.216000 audit[1754]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.216000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefa8317c0 a2=0 a3=7ffefa8317ac items=0 ppid=1630 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:26:26.218033 kubelet[1416]: I1002 19:26:26.218009 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x6vv7" podStartSLOduration=2.5559433670000002 podCreationTimestamp="2023-10-02 19:26:16 +0000 UTC" firstStartedPulling="2023-10-02 19:26:18.234490909 +0000 UTC m=+3.768204920" lastFinishedPulling="2023-10-02 19:26:25.896513266 +0000 UTC m=+11.430227277" observedRunningTime="2023-10-02 19:26:26.217604226 +0000 UTC m=+11.751318237" watchObservedRunningTime="2023-10-02 19:26:26.217965724 +0000 UTC m=+11.751679735" Oct 2 19:26:26.217000 audit[1755]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.217000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe33f54ed0 a2=0 a3=7ffe33f54ebc items=0 ppid=1630 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:26:26.219000 audit[1757]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.219000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffed7fc7450 a2=0 a3=7ffed7fc743c items=0 ppid=1630 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.219000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:26:26.222000 audit[1760]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.222000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc0c14ea70 a2=0 a3=7ffc0c14ea5c items=0 ppid=1630 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:26:26.223000 audit[1761]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.223000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e3627d0 a2=0 a3=7fff2e3627bc items=0 ppid=1630 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:26:26.225000 audit[1763]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.225000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc23e3da90 a2=0 a3=7ffc23e3da7c items=0 ppid=1630 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:26:26.226000 audit[1764]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.226000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc99109a0 a2=0 a3=7ffcc991098c items=0 ppid=1630 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:26:26.228000 audit[1766]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.228000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe2b1e8f80 a2=0 a3=7ffe2b1e8f6c items=0 ppid=1630 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:26:26.231000 audit[1769]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:26:26.231000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff75f56b50 a2=0 a3=7fff75f56b3c items=0 ppid=1630 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:26:26.233000 audit[1771]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:26:26.233000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd5a451570 a2=0 a3=7ffd5a45155c items=0 ppid=1630 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:26.233000 audit[1771]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:26:26.233000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffd5a451570 a2=0 a3=7ffd5a45155c items=0 ppid=1630 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:26.829361 kubelet[1416]: E1002 19:26:26.829305 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:26.943000 audit[1773]: NETFILTER_CFG table=filter:65 family=2 entries=15 op=nft_register_rule pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:26.943000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffceaa9bea0 a2=0 a3=7ffceaa9be8c items=0 ppid=1630 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:26.944000 audit[1773]: NETFILTER_CFG table=nat:66 family=2 entries=19 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:26.944000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=6068 a0=3 a1=7ffceaa9bea0 a2=0 a3=7ffceaa9be8c items=0 ppid=1630 
pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:26.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:27.212764 kubelet[1416]: E1002 19:26:27.212640 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:27.829833 kubelet[1416]: E1002 19:26:27.829761 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:28.830420 kubelet[1416]: E1002 19:26:28.830331 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:29.263007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792718900.mount: Deactivated successfully. Oct 2 19:26:29.831045 kubelet[1416]: E1002 19:26:29.830992 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:30.831386 kubelet[1416]: E1002 19:26:30.831303 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:31.832066 kubelet[1416]: E1002 19:26:31.832027 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:32.833071 kubelet[1416]: E1002 19:26:32.833002 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:33.833422 kubelet[1416]: E1002 19:26:33.833361 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:34.819410 kubelet[1416]: E1002 19:26:34.819371 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:34.833584 kubelet[1416]: E1002 19:26:34.833540 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:34.928908 env[1110]: time="2023-10-02T19:26:34.928830876Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:34.931121 env[1110]: time="2023-10-02T19:26:34.931055418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:34.935025 env[1110]: time="2023-10-02T19:26:34.934989275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:34.939883 env[1110]: time="2023-10-02T19:26:34.939839010Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:34bf454be8cd5b9a35ab29c2479ff68a26497c2c87eb606e4bfe57c7fbeeff35,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:34.940440 env[1110]: time="2023-10-02T19:26:34.940402567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.25.0\" returns image 
reference \"sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7\"" Oct 2 19:26:34.942093 env[1110]: time="2023-10-02T19:26:34.942053272Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 2 19:26:34.956159 env[1110]: time="2023-10-02T19:26:34.956109633Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7\"" Oct 2 19:26:34.956577 env[1110]: time="2023-10-02T19:26:34.956547484Z" level=info msg="StartContainer for \"0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7\"" Oct 2 19:26:34.971971 systemd[1]: Started cri-containerd-0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7.scope. Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.990712 kernel: kauditd_printk_skb: 236 callbacks suppressed Oct 2 19:26:34.990806 kernel: audit: type=1400 audit(1696274794.987:654): avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.990832 kernel: audit: type=1300 audit(1696274794.987:654): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1479 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:34.987000 audit[1782]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1479 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:34.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062646239323065616630623735346261323166376633623565326239 Oct 2 19:26:34.996250 kernel: audit: type=1327 audit(1696274794.987:654): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062646239323065616630623735346261323166376633623565326239 Oct 2 19:26:34.996295 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.998067 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.999986 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:35.003881 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:35.003926 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:35.007741 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:35.010840 kernel: audit: type=1400 audit(1696274794.987:655): avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.987000 audit: BPF prog-id=78 op=LOAD Oct 2 19:26:34.987000 audit[1782]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00026d710 items=0 ppid=1479 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:26:34.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062646239323065616630623735346261323166376633623565326239 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.989000 audit: BPF prog-id=79 op=LOAD Oct 2 19:26:34.989000 audit[1782]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00026d758 items=0 ppid=1479 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:34.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062646239323065616630623735346261323166376633623565326239 Oct 2 19:26:34.992000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:26:34.992000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { perfmon } for pid=1782 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit[1782]: AVC avc: denied { bpf } for pid=1782 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:34.992000 audit: BPF prog-id=80 op=LOAD Oct 2 19:26:34.992000 audit[1782]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00026d7e8 items=0 ppid=1479 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:34.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062646239323065616630623735346261323166376633623565326239 Oct 2 19:26:35.015033 env[1110]: time="2023-10-02T19:26:35.014999923Z" level=info msg="StartContainer for \"0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7\" returns successfully" Oct 2 19:26:35.227540 kubelet[1416]: E1002 19:26:35.227495 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:35.834415 kubelet[1416]: E1002 19:26:35.834352 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:36.229152 kubelet[1416]: E1002 19:26:36.229117 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:36.658565 systemd[1]: cri-containerd-0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7.scope: Deactivated successfully. Oct 2 19:26:36.661000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:26:36.674697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7-rootfs.mount: Deactivated successfully. 
Oct 2 19:26:36.709474 kubelet[1416]: I1002 19:26:36.707501 1416 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Oct 2 19:26:36.742590 kubelet[1416]: I1002 19:26:36.742532 1416 topology_manager.go:215] "Topology Admit Handler" podUID="0ac60edc-a9a5-4566-a663-7a49486a549a" podNamespace="kube-system" podName="coredns-5dd5756b68-kq6xj" Oct 2 19:26:36.742767 kubelet[1416]: I1002 19:26:36.742696 1416 topology_manager.go:215] "Topology Admit Handler" podUID="76dde907-d81f-4af1-8608-00e5081994e4" podNamespace="calico-system" podName="calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:26:36.743167 kubelet[1416]: I1002 19:26:36.743141 1416 topology_manager.go:215] "Topology Admit Handler" podUID="abf6e2c9-193c-4296-8247-02d6e5da6ae3" podNamespace="kube-system" podName="coredns-5dd5756b68-8c5qr" Oct 2 19:26:36.748114 systemd[1]: Created slice kubepods-burstable-pod0ac60edc_a9a5_4566_a663_7a49486a549a.slice. Oct 2 19:26:36.759048 systemd[1]: Created slice kubepods-besteffort-pod76dde907_d81f_4af1_8608_00e5081994e4.slice. Oct 2 19:26:36.762254 systemd[1]: Created slice kubepods-burstable-podabf6e2c9_193c_4296_8247_02d6e5da6ae3.slice. Oct 2 19:26:36.765814 kubelet[1416]: I1002 19:26:36.765779 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbb4z\" (UniqueName: \"kubernetes.io/projected/abf6e2c9-193c-4296-8247-02d6e5da6ae3-kube-api-access-hbb4z\") pod \"coredns-5dd5756b68-8c5qr\" (UID: \"abf6e2c9-193c-4296-8247-02d6e5da6ae3\") " pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:26:36.765889 kubelet[1416]: I1002 19:26:36.765827 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfzz\" (UniqueName: \"kubernetes.io/projected/0ac60edc-a9a5-4566-a663-7a49486a549a-kube-api-access-prfzz\") pod \"coredns-5dd5756b68-kq6xj\" (UID: \"0ac60edc-a9a5-4566-a663-7a49486a549a\") " pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:26:36.765925 kubelet[1416]: I1002 19:26:36.765888 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abf6e2c9-193c-4296-8247-02d6e5da6ae3-config-volume\") pod \"coredns-5dd5756b68-8c5qr\" (UID: \"abf6e2c9-193c-4296-8247-02d6e5da6ae3\") " pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:26:36.765962 kubelet[1416]: I1002 19:26:36.765933 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ac60edc-a9a5-4566-a663-7a49486a549a-config-volume\") pod \"coredns-5dd5756b68-kq6xj\" (UID: \"0ac60edc-a9a5-4566-a663-7a49486a549a\") " pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:26:36.765990 kubelet[1416]: I1002 19:26:36.765976 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zff\" (UniqueName: \"kubernetes.io/projected/76dde907-d81f-4af1-8608-00e5081994e4-kube-api-access-p6zff\") pod \"calico-kube-controllers-74b9887bb6-g8t2d\" (UID: \"76dde907-d81f-4af1-8608-00e5081994e4\") " pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:26:36.835180 kubelet[1416]: E1002 19:26:36.835112 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:36.873910 kubelet[1416]: I1002 19:26:36.873858 1416 topology_manager.go:215] "Topology Admit Handler" 
podUID="b0822001-b43f-4855-b401-678c43b136af" podNamespace="calico-system" podName="csi-node-driver-75kzt" Oct 2 19:26:36.897234 systemd[1]: Created slice kubepods-besteffort-podb0822001_b43f_4855_b401_678c43b136af.slice. Oct 2 19:26:36.967680 kubelet[1416]: I1002 19:26:36.967010 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0822001-b43f-4855-b401-678c43b136af-socket-dir\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:36.967680 kubelet[1416]: I1002 19:26:36.967067 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b0822001-b43f-4855-b401-678c43b136af-registration-dir\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:36.967680 kubelet[1416]: I1002 19:26:36.967240 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0822001-b43f-4855-b401-678c43b136af-kubelet-dir\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:36.967680 kubelet[1416]: I1002 19:26:36.967274 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b0822001-b43f-4855-b401-678c43b136af-varrun\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:36.967680 kubelet[1416]: I1002 19:26:36.967300 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etccalico\" (UniqueName: \"kubernetes.io/host-path/b0822001-b43f-4855-b401-678c43b136af-etccalico\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:36.967984 kubelet[1416]: I1002 19:26:36.967350 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rj7\" (UniqueName: \"kubernetes.io/projected/b0822001-b43f-4855-b401-678c43b136af-kube-api-access-k9rj7\") pod \"csi-node-driver-75kzt\" (UID: \"b0822001-b43f-4855-b401-678c43b136af\") " pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:37.010604 env[1110]: time="2023-10-02T19:26:37.010539192Z" level=info msg="shim disconnected" id=0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7 Oct 2 19:26:37.010604 env[1110]: time="2023-10-02T19:26:37.010601579Z" level=warning msg="cleaning up after shim disconnected" id=0bdb920eaf0b754ba21f7f3b5e2b968016a95e1d696de4b3b53c96735624ebc7 namespace=k8s.io Oct 2 19:26:37.010604 env[1110]: time="2023-10-02T19:26:37.010613311Z" level=info msg="cleaning up dead shim" Oct 2 19:26:37.017698 env[1110]: time="2023-10-02T19:26:37.017636061Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:26:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1831 runtime=io.containerd.runc.v2\n" Oct 2 19:26:37.058587 kubelet[1416]: E1002 19:26:37.058540 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:37.059226 
env[1110]: time="2023-10-02T19:26:37.059184639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kq6xj,Uid:0ac60edc-a9a5-4566-a663-7a49486a549a,Namespace:kube-system,Attempt:0,}" Oct 2 19:26:37.061816 env[1110]: time="2023-10-02T19:26:37.061761322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-g8t2d,Uid:76dde907-d81f-4af1-8608-00e5081994e4,Namespace:calico-system,Attempt:0,}" Oct 2 19:26:37.063957 kubelet[1416]: E1002 19:26:37.063924 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:37.064446 env[1110]: time="2023-10-02T19:26:37.064422112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8c5qr,Uid:abf6e2c9-193c-4296-8247-02d6e5da6ae3,Namespace:kube-system,Attempt:0,}" Oct 2 19:26:37.129814 env[1110]: time="2023-10-02T19:26:37.129722363Z" level=error msg="Failed to destroy network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.130089 env[1110]: time="2023-10-02T19:26:37.130060587Z" level=error msg="encountered an error cleaning up failed sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.130174 env[1110]: time="2023-10-02T19:26:37.130106163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kq6xj,Uid:0ac60edc-a9a5-4566-a663-7a49486a549a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.130429 kubelet[1416]: E1002 19:26:37.130400 1416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.130514 kubelet[1416]: E1002 19:26:37.130487 1416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:26:37.130514 kubelet[1416]: E1002 19:26:37.130511 1416 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:26:37.130594 kubelet[1416]: E1002 19:26:37.130580 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-kq6xj_kube-system(0ac60edc-a9a5-4566-a663-7a49486a549a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-kq6xj_kube-system(0ac60edc-a9a5-4566-a663-7a49486a549a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kq6xj" podUID="0ac60edc-a9a5-4566-a663-7a49486a549a" Oct 2 19:26:37.160937 env[1110]: time="2023-10-02T19:26:37.160680851Z" level=error msg="Failed to destroy network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.161852 env[1110]: time="2023-10-02T19:26:37.161712165Z" level=error msg="encountered an error cleaning up failed sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.162017 env[1110]: time="2023-10-02T19:26:37.161917130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-g8t2d,Uid:76dde907-d81f-4af1-8608-00e5081994e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.162576 kubelet[1416]: E1002 19:26:37.162545 1416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.162642 kubelet[1416]: E1002 19:26:37.162621 1416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:26:37.162672 kubelet[1416]: E1002 19:26:37.162649 1416 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:26:37.162742 kubelet[1416]: E1002 19:26:37.162723 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b9887bb6-g8t2d_calico-system(76dde907-d81f-4af1-8608-00e5081994e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b9887bb6-g8t2d_calico-system(76dde907-d81f-4af1-8608-00e5081994e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" podUID="76dde907-d81f-4af1-8608-00e5081994e4" Oct 2 19:26:37.163241 env[1110]: time="2023-10-02T19:26:37.163170139Z" level=error msg="Failed to destroy network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.163612 env[1110]: time="2023-10-02T19:26:37.163574167Z" level=error msg="encountered an error cleaning up failed sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.163673 env[1110]: time="2023-10-02T19:26:37.163627337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8c5qr,Uid:abf6e2c9-193c-4296-8247-02d6e5da6ae3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.163968 kubelet[1416]: E1002 19:26:37.163932 1416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.164056 kubelet[1416]: E1002 19:26:37.164008 1416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:26:37.164056 kubelet[1416]: E1002 19:26:37.164031 1416 kuberuntime_manager.go:1119] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:26:37.164121 kubelet[1416]: E1002 19:26:37.164093 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-8c5qr_kube-system(abf6e2c9-193c-4296-8247-02d6e5da6ae3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-8c5qr_kube-system(abf6e2c9-193c-4296-8247-02d6e5da6ae3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8c5qr" podUID="abf6e2c9-193c-4296-8247-02d6e5da6ae3" Oct 2 19:26:37.200596 env[1110]: time="2023-10-02T19:26:37.200532407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-75kzt,Uid:b0822001-b43f-4855-b401-678c43b136af,Namespace:calico-system,Attempt:0,}" Oct 2 19:26:37.233135 kubelet[1416]: I1002 19:26:37.231838 1416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:37.233135 kubelet[1416]: I1002 19:26:37.232904 1416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:37.233362 env[1110]: time="2023-10-02T19:26:37.232522450Z" level=info msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" Oct 2 19:26:37.237007 kubelet[1416]: E1002 19:26:37.236965 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:37.237867 env[1110]: time="2023-10-02T19:26:37.237824273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.25.0\"" Oct 2 19:26:37.238512 kubelet[1416]: I1002 19:26:37.238264 1416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:37.239191 env[1110]: time="2023-10-02T19:26:37.238884731Z" level=info msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" Oct 2 19:26:37.239769 env[1110]: time="2023-10-02T19:26:37.239727081Z" level=info msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" Oct 2 19:26:37.267324 env[1110]: time="2023-10-02T19:26:37.267241220Z" level=error msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" failed" error="failed to destroy network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.267936 kubelet[1416]: E1002 19:26:37.267723 1416 remote_runtime.go:222] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:37.267936 kubelet[1416]: E1002 19:26:37.267832 1416 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b"} Oct 2 19:26:37.267936 kubelet[1416]: E1002 19:26:37.267867 1416 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76dde907-d81f-4af1-8608-00e5081994e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:26:37.267936 kubelet[1416]: E1002 19:26:37.267898 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76dde907-d81f-4af1-8608-00e5081994e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" podUID="76dde907-d81f-4af1-8608-00e5081994e4" Oct 2 19:26:37.275102 env[1110]: time="2023-10-02T19:26:37.275019657Z" level=error msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" failed" error="failed to destroy network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.275832 kubelet[1416]: E1002 19:26:37.275649 1416 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:37.275832 kubelet[1416]: E1002 19:26:37.275703 1416 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479"} Oct 2 19:26:37.275832 kubelet[1416]: E1002 19:26:37.275748 1416 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abf6e2c9-193c-4296-8247-02d6e5da6ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:26:37.275832 kubelet[1416]: E1002 19:26:37.275813 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abf6e2c9-193c-4296-8247-02d6e5da6ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8c5qr" podUID="abf6e2c9-193c-4296-8247-02d6e5da6ae3" Oct 2 19:26:37.277722 env[1110]: time="2023-10-02T19:26:37.277664748Z" level=error msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" failed" error="failed to destroy network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.278094 kubelet[1416]: E1002 19:26:37.278062 1416 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:37.278152 kubelet[1416]: E1002 19:26:37.278121 1416 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c"} Oct 2 19:26:37.278197 kubelet[1416]: E1002 19:26:37.278161 1416 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ac60edc-a9a5-4566-a663-7a49486a549a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:26:37.278429 kubelet[1416]: E1002 19:26:37.278401 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ac60edc-a9a5-4566-a663-7a49486a549a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kq6xj" podUID="0ac60edc-a9a5-4566-a663-7a49486a549a" Oct 2 19:26:37.318489 env[1110]: time="2023-10-02T19:26:37.318413406Z" level=error msg="Failed to destroy network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.318781 env[1110]: time="2023-10-02T19:26:37.318746380Z" level=error msg="encountered an error cleaning up failed sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.318833 env[1110]: time="2023-10-02T19:26:37.318812254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-75kzt,Uid:b0822001-b43f-4855-b401-678c43b136af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.319165 kubelet[1416]: E1002 19:26:37.319116 1416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:37.319232 kubelet[1416]: E1002 19:26:37.319197 1416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:37.319232 kubelet[1416]: E1002 19:26:37.319220 1416 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:26:37.319297 kubelet[1416]: E1002 19:26:37.319283 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:26:37.441125 kubelet[1416]: I1002 19:26:37.441068 1416 topology_manager.go:215] "Topology Admit Handler" podUID="d465276a-936e-4514-bd15-fe7cf64b503d" podNamespace="default" podName="nginx-deployment-6d5f899847-nztm7" Oct 2 19:26:37.458083 systemd[1]: Created slice 
kubepods-besteffort-podd465276a_936e_4514_bd15_fe7cf64b503d.slice. Oct 2 19:26:37.471850 kubelet[1416]: I1002 19:26:37.471770 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dplbr\" (UniqueName: \"kubernetes.io/projected/d465276a-936e-4514-bd15-fe7cf64b503d-kube-api-access-dplbr\") pod \"nginx-deployment-6d5f899847-nztm7\" (UID: \"d465276a-936e-4514-bd15-fe7cf64b503d\") " pod="default/nginx-deployment-6d5f899847-nztm7" Oct 2 19:26:37.675709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b-shm.mount: Deactivated successfully. Oct 2 19:26:37.675838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c-shm.mount: Deactivated successfully. Oct 2 19:26:37.835637 kubelet[1416]: E1002 19:26:37.835563 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:38.061130 env[1110]: time="2023-10-02T19:26:38.060991986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nztm7,Uid:d465276a-936e-4514-bd15-fe7cf64b503d,Namespace:default,Attempt:0,}" Oct 2 19:26:38.241292 kubelet[1416]: I1002 19:26:38.241251 1416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:38.242060 env[1110]: time="2023-10-02T19:26:38.242005511Z" level=info msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" Oct 2 19:26:38.249288 env[1110]: time="2023-10-02T19:26:38.249225721Z" level=error msg="Failed to destroy network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:38.250645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b-shm.mount: Deactivated successfully. 
Oct 2 19:26:38.251439 env[1110]: time="2023-10-02T19:26:38.251401882Z" level=error msg="encountered an error cleaning up failed sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:38.251508 env[1110]: time="2023-10-02T19:26:38.251462446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nztm7,Uid:d465276a-936e-4514-bd15-fe7cf64b503d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:38.251826 kubelet[1416]: E1002 19:26:38.251752 1416 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:38.251826 kubelet[1416]: E1002 19:26:38.251823 1416 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-nztm7" Oct 2 19:26:38.252041 kubelet[1416]: E1002 19:26:38.251845 1416 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-nztm7" Oct 2 19:26:38.252041 kubelet[1416]: E1002 19:26:38.251911 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-nztm7_default(d465276a-936e-4514-bd15-fe7cf64b503d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-nztm7_default(d465276a-936e-4514-bd15-fe7cf64b503d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-nztm7" podUID="d465276a-936e-4514-bd15-fe7cf64b503d" Oct 2 19:26:38.265581 env[1110]: time="2023-10-02T19:26:38.265505462Z" level=error msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" failed" error="failed to destroy network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:38.265892 kubelet[1416]: E1002 19:26:38.265858 1416 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:38.265976 kubelet[1416]: E1002 19:26:38.265908 1416 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2"} Oct 2 19:26:38.265976 kubelet[1416]: E1002 19:26:38.265943 1416 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0822001-b43f-4855-b401-678c43b136af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:26:38.265976 kubelet[1416]: E1002 19:26:38.265975 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0822001-b43f-4855-b401-678c43b136af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:26:38.836079 kubelet[1416]: E1002 19:26:38.836026 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:39.244817 kubelet[1416]: I1002 19:26:39.244765 1416 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:39.245417 env[1110]: time="2023-10-02T19:26:39.245359307Z" level=info msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" Oct 2 19:26:39.269236 env[1110]: time="2023-10-02T19:26:39.269168258Z" level=error msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" failed" error="failed to destroy network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 2 19:26:39.269549 kubelet[1416]: E1002 19:26:39.269502 1416 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:39.269549 kubelet[1416]: E1002 19:26:39.269543 1416 kuberuntime_manager.go:1315] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b"} Oct 2 19:26:39.269758 kubelet[1416]: E1002 19:26:39.269578 1416 kuberuntime_manager.go:1028] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d465276a-936e-4514-bd15-fe7cf64b503d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 2 19:26:39.269758 kubelet[1416]: E1002 19:26:39.269604 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d465276a-936e-4514-bd15-fe7cf64b503d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-nztm7" podUID="d465276a-936e-4514-bd15-fe7cf64b503d" Oct 2 19:26:39.836962 kubelet[1416]: E1002 19:26:39.836834 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:40.837767 kubelet[1416]: E1002 19:26:40.837705 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:41.837902 kubelet[1416]: E1002 19:26:41.837833 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:42.838465 kubelet[1416]: E1002 19:26:42.838382 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:43.481929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195256533.mount: Deactivated successfully. 
Oct 2 19:26:43.839114 kubelet[1416]: E1002 19:26:43.838939 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:44.839809 kubelet[1416]: E1002 19:26:44.839745 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:45.685916 env[1110]: time="2023-10-02T19:26:45.685835142Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:45.692444 env[1110]: time="2023-10-02T19:26:45.692401062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:45.694171 env[1110]: time="2023-10-02T19:26:45.694131449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:45.698846 env[1110]: time="2023-10-02T19:26:45.698772710Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e898f4b7b55c908c88dad008ae939024e71ed93c5effbb10cca891b658b2f001,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:45.699282 env[1110]: time="2023-10-02T19:26:45.699241436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.25.0\" returns image reference \"sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e\"" Oct 2 19:26:45.701634 env[1110]: time="2023-10-02T19:26:45.701603270Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 2 19:26:45.716918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681417520.mount: Deactivated successfully. Oct 2 19:26:45.724060 env[1110]: time="2023-10-02T19:26:45.723996740Z" level=info msg="CreateContainer within sandbox \"b5ba4de29f917302a848e75d8d0aedc10df42e7b4ca1710f98888ad5b404115b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3\"" Oct 2 19:26:45.724761 env[1110]: time="2023-10-02T19:26:45.724714532Z" level=info msg="StartContainer for \"a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3\"" Oct 2 19:26:45.755699 systemd[1]: Started cri-containerd-a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3.scope. 
Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.799308 kernel: kauditd_printk_skb: 34 callbacks suppressed Oct 2 19:26:45.799369 kernel: audit: type=1400 audit(1696274805.797:661): avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=8 items=0 ppid=1479 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:45.804317 kernel: audit: type=1300 audit(1696274805.797:661): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=8 items=0 ppid=1479 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:45.804361 kernel: audit: type=1327 audit(1696274805.797:661): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616232326131633761353635616562623130306232353538393437 Oct 2 19:26:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616232326131633761353635616562623130306232353538393437 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.809160 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.809302 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.813371 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.813422 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { 
perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.820680 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.820715 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.820747 kernel: audit: type=1400 audit(1696274805.797:662): avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.797000 audit: BPF prog-id=81 op=LOAD Oct 2 19:26:45.797000 audit[2163]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c00031a2a0 items=0 ppid=1479 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:45.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616232326131633761353635616562623130306232353538393437 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { 
perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.800000 audit: BPF prog-id=82 op=LOAD Oct 2 19:26:45.800000 audit[2163]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c00031a2e8 items=0 ppid=1479 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:45.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616232326131633761353635616562623130306232353538393437 Oct 2 19:26:45.803000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:26:45.803000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { 
perfmon } for pid=2163 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit[2163]: AVC avc: denied { bpf } for pid=2163 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:45.803000 audit: BPF prog-id=83 op=LOAD Oct 2 19:26:45.803000 audit[2163]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c00031a378 items=0 ppid=1479 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:45.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131616232326131633761353635616562623130306232353538393437 Oct 2 19:26:45.839953 kubelet[1416]: E1002 19:26:45.839866 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:45.962560 env[1110]: time="2023-10-02T19:26:45.962412740Z" level=info msg="StartContainer for \"a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3\" returns successfully" Oct 2 19:26:46.025585 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 2 19:26:46.025755 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 2 19:26:46.263764 kubelet[1416]: E1002 19:26:46.263645 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:46.713951 kubelet[1416]: I1002 19:26:46.713898 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-6pn5j" podStartSLOduration=3.244650664 podCreationTimestamp="2023-10-02 19:26:16 +0000 UTC" firstStartedPulling="2023-10-02 19:26:18.23037499 +0000 UTC m=+3.764089001" lastFinishedPulling="2023-10-02 19:26:45.699579171 +0000 UTC m=+31.233293182" observedRunningTime="2023-10-02 19:26:46.713662868 +0000 UTC m=+32.247376910" watchObservedRunningTime="2023-10-02 19:26:46.713854845 +0000 UTC m=+32.247568856" Oct 2 19:26:46.714470 systemd[1]: run-containerd-runc-k8s.io-a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3-runc.zHPlFb.mount: Deactivated successfully. Oct 2 19:26:46.840093 kubelet[1416]: E1002 19:26:46.840029 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:47.120227 kubelet[1416]: I1002 19:26:47.120154 1416 topology_manager.go:215] "Topology Admit Handler" podUID="cdcab79d-5ddb-4900-9b9c-6f6ae31bf773" podNamespace="tigera-operator" podName="tigera-operator-8547bd6cc6-zx7vw" Oct 2 19:26:47.124823 systemd[1]: Created slice kubepods-besteffort-podcdcab79d_5ddb_4900_9b9c_6f6ae31bf773.slice. 
Oct 2 19:26:47.225641 kubelet[1416]: I1002 19:26:47.225587 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcqgn\" (UniqueName: \"kubernetes.io/projected/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-kube-api-access-bcqgn\") pod \"tigera-operator-8547bd6cc6-zx7vw\" (UID: \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\") " pod="tigera-operator/tigera-operator-8547bd6cc6-zx7vw" Oct 2 19:26:47.225641 kubelet[1416]: I1002 19:26:47.225650 1416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-var-lib-calico\") pod \"tigera-operator-8547bd6cc6-zx7vw\" (UID: \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\") " pod="tigera-operator/tigera-operator-8547bd6cc6-zx7vw" Oct 2 19:26:47.265639 kubelet[1416]: E1002 19:26:47.265606 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:47.427684 env[1110]: time="2023-10-02T19:26:47.427555393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8547bd6cc6-zx7vw,Uid:cdcab79d-5ddb-4900-9b9c-6f6ae31bf773,Namespace:tigera-operator,Attempt:0,}" Oct 2 19:26:47.454383 env[1110]: time="2023-10-02T19:26:47.454308389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:47.454383 env[1110]: time="2023-10-02T19:26:47.454351893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:47.454383 env[1110]: time="2023-10-02T19:26:47.454362943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:47.454640 env[1110]: time="2023-10-02T19:26:47.454510144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad pid=2276 runtime=io.containerd.runc.v2 Oct 2 19:26:47.469581 systemd[1]: Started cri-containerd-388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad.scope. 
Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit: BPF prog-id=84 op=LOAD Oct 2 19:26:47.481000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.481000 audit[2286]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2276 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338386432323161353331303663346165313330626661383539666662 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2276 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.482000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338386432323161353331303663346165313330626661383539666662 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit: BPF prog-id=85 op=LOAD Oct 2 19:26:47.482000 audit[2286]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0003dc6f0 items=0 ppid=2276 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338386432323161353331303663346165313330626661383539666662 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: 
denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit: BPF prog-id=86 op=LOAD Oct 2 19:26:47.482000 audit[2286]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0003dc738 items=0 ppid=2276 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338386432323161353331303663346165313330626661383539666662 Oct 2 19:26:47.482000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:26:47.482000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: 
denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { perfmon } for pid=2286 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit[2286]: AVC avc: denied { bpf } for pid=2286 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:47.482000 audit: BPF prog-id=87 op=LOAD Oct 2 19:26:47.482000 audit[2286]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0003dcb48 items=0 ppid=2276 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:47.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338386432323161353331303663346165313330626661383539666662 Oct 2 19:26:47.508460 env[1110]: time="2023-10-02T19:26:47.508407562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8547bd6cc6-zx7vw,Uid:cdcab79d-5ddb-4900-9b9c-6f6ae31bf773,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\"" Oct 2 19:26:47.510134 env[1110]: time="2023-10-02T19:26:47.510099990Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.29.0\"" Oct 2 19:26:47.841037 kubelet[1416]: E1002 19:26:47.840891 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:47.917402 env[1110]: time="2023-10-02T19:26:47.917328586Z" level=info msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.952 [INFO][2325] k8s.go 576: Cleaning up netns ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.952 [INFO][2325] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" iface="eth0" netns="/var/run/netns/cni-394b84cb-fe5b-19bf-5594-bef19549d0be" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.953 [INFO][2325] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" iface="eth0" netns="/var/run/netns/cni-394b84cb-fe5b-19bf-5594-bef19549d0be" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.953 [INFO][2325] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. 
ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" iface="eth0" netns="/var/run/netns/cni-394b84cb-fe5b-19bf-5594-bef19549d0be" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.953 [INFO][2325] k8s.go 583: Releasing IP address(es) ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.953 [INFO][2325] utils.go 196: Calico CNI releasing IP address ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.976 [INFO][2333] ipam_plugin.go 416: Releasing address using handleID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:47.988522 env[1110]: time="2023-10-02T19:26:47Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:47.988522 env[1110]: time="2023-10-02T19:26:47Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.982 [WARNING][2333] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.982 [INFO][2333] ipam_plugin.go 444: Releasing address using workloadID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:47.988522 env[1110]: time="2023-10-02T19:26:47Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:26:47.988522 env[1110]: 2023-10-02 19:26:47.987 [INFO][2325] k8s.go 589: Teardown processing complete. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:26:47.988987 env[1110]: time="2023-10-02T19:26:47.988656934Z" level=info msg="TearDown network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" successfully" Oct 2 19:26:47.988987 env[1110]: time="2023-10-02T19:26:47.988702931Z" level=info msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" returns successfully" Oct 2 19:26:47.989852 systemd[1]: run-netns-cni\x2d394b84cb\x2dfe5b\x2d19bf\x2d5594\x2dbef19549d0be.mount: Deactivated successfully. 
Oct 2 19:26:47.990590 kubelet[1416]: E1002 19:26:47.990558 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:47.991103 env[1110]: time="2023-10-02T19:26:47.991063583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kq6xj,Uid:0ac60edc-a9a5-4566-a663-7a49486a549a,Namespace:kube-system,Attempt:1,}" Oct 2 19:26:48.036000 audit[2402]: AVC avc: denied { write } for pid=2402 comm="tee" name="fd" dev="proc" ino=19898 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.037000 audit[2412]: AVC avc: denied { write } for pid=2412 comm="tee" name="fd" dev="proc" ino=19245 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.037000 audit[2412]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0148397f a2=241 a3=1b6 items=1 ppid=2361 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.037000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 2 19:26:48.037000 audit: PATH item=0 name="/dev/fd/63" inode=19895 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.036000 audit[2402]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed89ea980 a2=241 a3=1b6 items=1 ppid=2362 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.036000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 2 19:26:48.036000 audit: PATH item=0 name="/dev/fd/63" inode=19892 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.048000 audit[2422]: AVC avc: denied { write } for pid=2422 comm="tee" name="fd" dev="proc" ino=19915 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.050000 audit[2429]: AVC avc: denied { write } for pid=2429 comm="tee" name="fd" dev="proc" ino=19918 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.051000 audit[2431]: AVC avc: denied { write } for pid=2431 comm="tee" name="fd" dev="proc" ino=19921 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.052000 audit[2435]: AVC avc: denied { write } for pid=2435 comm="tee" name="fd" dev="proc" ino=19924 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.048000 audit[2422]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c 
a1=7ffd2dcb1991 a2=241 a3=1b6 items=1 ppid=2363 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.048000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 2 19:26:48.048000 audit: PATH item=0 name="/dev/fd/63" inode=20870 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.050000 audit[2429]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce32e498f a2=241 a3=1b6 items=1 ppid=2367 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.050000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 2 19:26:48.050000 audit: PATH item=0 name="/dev/fd/63" inode=19907 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.051000 audit[2431]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff635b3990 a2=241 a3=1b6 items=1 ppid=2368 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.051000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 2 19:26:48.051000 audit: PATH item=0 name="/dev/fd/63" inode=19910 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.052000 audit[2435]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb32b398f a2=241 a3=1b6 items=1 ppid=2374 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.052000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 2 19:26:48.052000 audit: PATH item=0 name="/dev/fd/63" inode=19912 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.071000 audit[2424]: AVC avc: denied { write } for pid=2424 comm="tee" name="fd" dev="proc" ino=19931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 2 19:26:48.071000 audit[2424]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2e2f698f a2=241 a3=1b6 items=1 ppid=2370 pid=2424 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.071000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 2 19:26:48.071000 audit: PATH item=0 name="/dev/fd/63" inode=19904 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:26:48.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 2 19:26:48.212726 systemd-networkd[1011]: cali283fb31178f: Link UP Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.032 [INFO][2341] utils.go 108: File /var/lib/calico/mtu does not exist Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.049 [INFO][2341] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0 coredns-5dd5756b68- kube-system 0ac60edc-a9a5-4566-a663-7a49486a549a 887 0 2023-10-02 19:26:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.19 coredns-5dd5756b68-kq6xj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali283fb31178f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.049 [INFO][2341] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.126 [INFO][2448] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" HandleID="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.141 [INFO][2448] ipam_plugin.go 269: Auto assigning IP ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" HandleID="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004dee0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.19", "pod":"coredns-5dd5756b68-kq6xj", "timestamp":"2023-10-02 19:26:48.124689403 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:26:48.220467 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:48.220467 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.141 [INFO][2448] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.146 [INFO][2448] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.154 [INFO][2448] ipam.go 372: Looking up existing affinities for host host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.158 [INFO][2448] ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.165 [INFO][2448] ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.168 [INFO][2448] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.168 [INFO][2448] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.171 [INFO][2448] ipam.go 1682: Creating new handle: k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.175 [INFO][2448] ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.180 [INFO][2448] ipam.go 1216: Successfully claimed IPs: [192.168.37.1/26] block=192.168.37.0/26 handle="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.181 [INFO][2448] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.1/26] handle="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" host="10.0.0.19" Oct 2 19:26:48.220467 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:48.220467 env[1110]: 2023-10-02 19:26:48.181 [INFO][2448] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.37.1/26] IPv6=[] ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" HandleID="k8s-pod-network.24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.183 [INFO][2341] k8s.go 383: Populated endpoint ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0ac60edc-a9a5-4566-a663-7a49486a549a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"coredns-5dd5756b68-kq6xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283fb31178f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.183 [INFO][2341] k8s.go 384: Calico CNI using IPs: [192.168.37.1/32] ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.183 [INFO][2341] dataplane_linux.go 68: Setting the host side veth name to cali283fb31178f ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.206 [INFO][2341] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.212 [INFO][2341] k8s.go 411: Added Mac, interface 
name, and active container ID to endpoint ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0ac60edc-a9a5-4566-a663-7a49486a549a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c", Pod:"coredns-5dd5756b68-kq6xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283fb31178f", MAC:"92:e8:96:53:3a:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:48.221158 env[1110]: 2023-10-02 19:26:48.218 [INFO][2341] k8s.go 489: Wrote updated endpoint to datastore ContainerID="24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c" Namespace="kube-system" Pod="coredns-5dd5756b68-kq6xj" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:26:48.240494 env[1110]: time="2023-10-02T19:26:48.240408002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:48.240656 env[1110]: time="2023-10-02T19:26:48.240494557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:48.240656 env[1110]: time="2023-10-02T19:26:48.240529063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:48.240750 env[1110]: time="2023-10-02T19:26:48.240707863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c pid=2508 runtime=io.containerd.runc.v2 Oct 2 19:26:48.259355 systemd[1]: Started cri-containerd-24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c.scope. 
Oct 2 19:26:48.265980 kernel: Initializing XFRM netlink socket Oct 2 19:26:48.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit: BPF prog-id=88 op=LOAD Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333137643061346338373838353539366130653231323035666536 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333137643061346338373838353539366130653231323035666536 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit: BPF prog-id=89 op=LOAD Oct 2 19:26:48.286000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000229b20 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333137643061346338373838353539366130653231323035666536 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.286000 audit: BPF prog-id=90 op=LOAD Oct 2 19:26:48.286000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000229b68 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333137643061346338373838353539366130653231323035666536 Oct 2 19:26:48.286000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:26:48.287000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { perfmon } for pid=2520 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit[2520]: AVC avc: denied { bpf } for pid=2520 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.287000 audit: BPF prog-id=91 op=LOAD Oct 2 19:26:48.287000 audit[2520]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000229f78 items=0 ppid=2508 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333137643061346338373838353539366130653231323035666536 Oct 2 19:26:48.288415 systemd-resolved[1056]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:26:48.317086 env[1110]: time="2023-10-02T19:26:48.316993283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kq6xj,Uid:0ac60edc-a9a5-4566-a663-7a49486a549a,Namespace:kube-system,Attempt:1,} returns sandbox id \"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c\"" Oct 2 19:26:48.320295 kubelet[1416]: E1002 19:26:48.320263 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit: BPF prog-id=92 op=LOAD Oct 2 19:26:48.350000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd000a290 a2=70 a3=7f0b98ebd000 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.350000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit: BPF prog-id=93 op=LOAD Oct 2 19:26:48.350000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd000a290 a2=70 a3=6e items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.350000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffd000a240 a2=70 a3=7fffd000a290 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.350000 audit: BPF prog-id=94 op=LOAD Oct 2 19:26:48.350000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd000a220 a2=70 a3=7fffd000a290 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:26:48.350000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.351000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd000a300 a2=70 a3=0 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd000a2f0 a2=70 a3=0 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffd000a330 a2=70 a3=0 items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:26:48.351000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { perfmon } for pid=2576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit[2576]: AVC avc: denied { bpf } for pid=2576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.351000 audit: BPF prog-id=95 op=LOAD Oct 2 19:26:48.351000 audit[2576]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd000a250 a2=70 a3=ffffffff items=0 ppid=2373 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 2 19:26:48.354000 audit[2580]: AVC avc: denied { bpf } for pid=2580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.354000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff79d11c10 a2=70 a3=208 items=0 ppid=2373 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.354000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 2 19:26:48.354000 audit[2580]: AVC avc: denied { bpf } for pid=2580 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:48.354000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff79d11ae0 a2=70 a3=3 items=0 ppid=2373 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.354000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 2 19:26:48.363000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:26:48.438000 audit[2606]: 
NETFILTER_CFG table=mangle:67 family=2 entries=19 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:48.438000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffd4669e990 a2=0 a3=7ffd4669e97c items=0 ppid=2373 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.438000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:48.442000 audit[2607]: NETFILTER_CFG table=raw:68 family=2 entries=19 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:48.442000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffdaa491450 a2=0 a3=55f70fbca000 items=0 ppid=2373 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.442000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:48.443000 audit[2608]: NETFILTER_CFG table=nat:69 family=2 entries=16 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:48.443000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffd3ea1a120 a2=0 a3=55e402ebd000 items=0 ppid=2373 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.443000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:48.444000 audit[2609]: NETFILTER_CFG table=filter:70 family=2 entries=71 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:48.444000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=36636 a0=3 a1=7fff65992ab0 a2=0 a3=564a851e1000 items=0 ppid=2373 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:48.444000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:48.841444 kubelet[1416]: E1002 19:26:48.841391 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:48.913585 env[1110]: time="2023-10-02T19:26:48.913535860Z" level=info msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.954 [INFO][2632] k8s.go 576: Cleaning up netns ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.954 [INFO][2632] dataplane_linux.go 524: Deleting workload's device in netns. 
ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" iface="eth0" netns="/var/run/netns/cni-77152a53-c54b-46f0-7d11-401dc09d9057" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.955 [INFO][2632] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" iface="eth0" netns="/var/run/netns/cni-77152a53-c54b-46f0-7d11-401dc09d9057" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.955 [INFO][2632] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" iface="eth0" netns="/var/run/netns/cni-77152a53-c54b-46f0-7d11-401dc09d9057" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.955 [INFO][2632] k8s.go 583: Releasing IP address(es) ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.955 [INFO][2632] utils.go 196: Calico CNI releasing IP address ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.976 [INFO][2640] ipam_plugin.go 416: Releasing address using handleID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:48.985876 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:48.985876 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.983 [WARNING][2640] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.983 [INFO][2640] ipam_plugin.go 444: Releasing address using workloadID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:48.985876 env[1110]: time="2023-10-02T19:26:48Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:26:48.985876 env[1110]: 2023-10-02 19:26:48.984 [INFO][2632] k8s.go 589: Teardown processing complete. 
ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:26:48.986354 env[1110]: time="2023-10-02T19:26:48.986009027Z" level=info msg="TearDown network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" successfully" Oct 2 19:26:48.986354 env[1110]: time="2023-10-02T19:26:48.986045668Z" level=info msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" returns successfully" Oct 2 19:26:48.986770 env[1110]: time="2023-10-02T19:26:48.986742114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-g8t2d,Uid:76dde907-d81f-4af1-8608-00e5081994e4,Namespace:calico-system,Attempt:1,}" Oct 2 19:26:48.987372 systemd[1]: run-netns-cni\x2d77152a53\x2dc54b\x2d46f0\x2d7d11\x2d401dc09d9057.mount: Deactivated successfully. Oct 2 19:26:48.993012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200874188.mount: Deactivated successfully. Oct 2 19:26:49.098818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali899a2c2ae2c: link becomes ready Oct 2 19:26:49.102486 systemd-networkd[1011]: cali899a2c2ae2c: Link UP Oct 2 19:26:49.102490 systemd-networkd[1011]: cali899a2c2ae2c: Gained carrier Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.036 [INFO][2647] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0 calico-kube-controllers-74b9887bb6- calico-system 76dde907-d81f-4af1-8608-00e5081994e4 899 0 2023-10-02 19:26:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b9887bb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.19 calico-kube-controllers-74b9887bb6-g8t2d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali899a2c2ae2c [] []}} ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.036 [INFO][2647] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.067 [INFO][2664] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" HandleID="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.076 [INFO][2664] ipam_plugin.go 269: Auto assigning IP ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" HandleID="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000190d80), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.19", "pod":"calico-kube-controllers-74b9887bb6-g8t2d", 
"timestamp":"2023-10-02 19:26:49.067111372 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:26:49.110846 env[1110]: time="2023-10-02T19:26:49Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:49.110846 env[1110]: time="2023-10-02T19:26:49Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.076 [INFO][2664] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.077 [INFO][2664] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.080 [INFO][2664] ipam.go 372: Looking up existing affinities for host host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.084 [INFO][2664] ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.085 [INFO][2664] ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.087 [INFO][2664] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.087 [INFO][2664] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.088 [INFO][2664] ipam.go 1682: Creating new handle: k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4 Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.091 [INFO][2664] ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.094 [INFO][2664] ipam.go 1216: Successfully claimed IPs: [192.168.37.2/26] block=192.168.37.0/26 handle="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.094 [INFO][2664] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.2/26] handle="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" host="10.0.0.19" Oct 2 19:26:49.110846 env[1110]: time="2023-10-02T19:26:49Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:49.110846 env[1110]: 2023-10-02 19:26:49.094 [INFO][2664] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.37.2/26] IPv6=[] ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" HandleID="k8s-pod-network.8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.096 [INFO][2647] k8s.go 383: Populated endpoint ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"76dde907-d81f-4af1-8608-00e5081994e4", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"calico-kube-controllers-74b9887bb6-g8t2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899a2c2ae2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.096 [INFO][2647] k8s.go 384: Calico CNI using IPs: [192.168.37.2/32] ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.096 [INFO][2647] dataplane_linux.go 68: Setting the host side veth name to cali899a2c2ae2c ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.098 [INFO][2647] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.102 [INFO][2647] k8s.go 411: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"76dde907-d81f-4af1-8608-00e5081994e4", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4", Pod:"calico-kube-controllers-74b9887bb6-g8t2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899a2c2ae2c", MAC:"5e:5f:70:05:0e:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:49.111554 env[1110]: 2023-10-02 19:26:49.108 [INFO][2647] k8s.go 489: Wrote updated endpoint to datastore ContainerID="8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4" Namespace="calico-system" Pod="calico-kube-controllers-74b9887bb6-g8t2d" WorkloadEndpoint="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:26:49.126582 env[1110]: time="2023-10-02T19:26:49.126508040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:49.126582 env[1110]: time="2023-10-02T19:26:49.126546533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:49.126582 env[1110]: time="2023-10-02T19:26:49.126556712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:49.126820 env[1110]: time="2023-10-02T19:26:49.126690096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4 pid=2692 runtime=io.containerd.runc.v2 Oct 2 19:26:49.138767 systemd[1]: Started cri-containerd-8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4.scope. 
Oct 2 19:26:49.170000 audit[2714]: NETFILTER_CFG table=filter:71 family=2 entries=40 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:49.170000 audit[2714]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffde847af40 a2=0 a3=7ffde847af2c items=0 ppid=2373 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.170000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.179000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit: BPF prog-id=96 op=LOAD Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2692 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861383739376330633362356534383664336162353565383535613066 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2692 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861383739376330633362356534383664336162353565383535613066 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.180000 audit: BPF prog-id=97 op=LOAD Oct 2 19:26:49.180000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c0003b0720 items=0 ppid=2692 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861383739376330633362356534383664336162353565383535613066 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit: BPF prog-id=98 op=LOAD Oct 2 19:26:49.181000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c0003b0768 items=0 ppid=2692 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861383739376330633362356534383664336162353565383535613066 Oct 2 19:26:49.181000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:26:49.181000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: 
AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { perfmon } for pid=2701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit[2701]: AVC avc: denied { bpf } for pid=2701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:49.181000 audit: BPF prog-id=99 op=LOAD Oct 2 19:26:49.181000 audit[2701]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0003b0b78 items=0 ppid=2692 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:49.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861383739376330633362356534383664336162353565383535613066 Oct 2 19:26:49.182511 systemd-resolved[1056]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:26:49.206535 env[1110]: time="2023-10-02T19:26:49.206475286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b9887bb6-g8t2d,Uid:76dde907-d81f-4af1-8608-00e5081994e4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4\"" Oct 2 19:26:49.219982 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali283fb31178f: link becomes ready Oct 2 19:26:49.219751 systemd-networkd[1011]: cali283fb31178f: Gained carrier Oct 2 19:26:49.276628 systemd-networkd[1011]: vxlan.calico: Link UP Oct 2 19:26:49.276635 systemd-networkd[1011]: vxlan.calico: Gained carrier Oct 2 19:26:49.842537 kubelet[1416]: E1002 19:26:49.842496 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:49.982740 env[1110]: time="2023-10-02T19:26:49.982657412Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.29.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:26:49.984452 env[1110]: time="2023-10-02T19:26:49.984404347Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:343ea4f89a32c8f197173c5d9f1ad64eb033df452c5b89a65877d8d3cfa692b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:49.985980 env[1110]: time="2023-10-02T19:26:49.985955869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.29.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:49.987414 env[1110]: time="2023-10-02T19:26:49.987388756Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:49.987983 env[1110]: time="2023-10-02T19:26:49.987953551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.29.0\" returns image reference \"sha256:343ea4f89a32c8f197173c5d9f1ad64eb033df452c5b89a65877d8d3cfa692b1\"" Oct 2 19:26:49.988815 env[1110]: time="2023-10-02T19:26:49.988765276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Oct 2 19:26:49.989542 env[1110]: time="2023-10-02T19:26:49.989515594Z" level=info msg="CreateContainer within sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 2 19:26:50.000615 env[1110]: time="2023-10-02T19:26:50.000577956Z" level=info msg="CreateContainer within sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\"" Oct 2 19:26:50.000935 env[1110]: time="2023-10-02T19:26:50.000904898Z" level=info msg="StartContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\"" Oct 2 19:26:50.020801 systemd[1]: Started cri-containerd-9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505.scope. 
Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit: BPF prog-id=100 op=LOAD Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2276 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:50.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393037333033383965366637643337653537313830306539326361 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2276 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:50.035000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393037333033383965366637643337653537313830306539326361 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit: BPF prog-id=101 op=LOAD Oct 2 19:26:50.035000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001d9110 items=0 ppid=2276 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:50.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393037333033383965366637643337653537313830306539326361 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: 
denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.035000 audit: BPF prog-id=102 op=LOAD Oct 2 19:26:50.035000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001d9158 items=0 ppid=2276 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:50.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393037333033383965366637643337653537313830306539326361 Oct 2 19:26:50.036000 audit: BPF prog-id=102 op=UNLOAD Oct 2 19:26:50.036000 audit: BPF prog-id=101 op=UNLOAD Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC 
avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { perfmon } for pid=2747 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit[2747]: AVC avc: denied { bpf } for pid=2747 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:50.036000 audit: BPF prog-id=103 op=LOAD Oct 2 19:26:50.036000 audit[2747]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001d9568 items=0 ppid=2276 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:50.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393037333033383965366637643337653537313830306539326361 Oct 2 19:26:50.051941 env[1110]: time="2023-10-02T19:26:50.051907593Z" level=info msg="StartContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" returns successfully" Oct 2 19:26:50.282232 kubelet[1416]: I1002 19:26:50.282192 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8547bd6cc6-zx7vw" podStartSLOduration=0.803487367 podCreationTimestamp="2023-10-02 19:26:47 +0000 UTC" firstStartedPulling="2023-10-02 19:26:47.509599555 +0000 UTC m=+33.043313567" lastFinishedPulling="2023-10-02 19:26:49.988264082 +0000 UTC m=+35.521978083" observedRunningTime="2023-10-02 19:26:50.281943447 +0000 UTC m=+35.815657468" watchObservedRunningTime="2023-10-02 19:26:50.282151883 +0000 UTC m=+35.815865894" Oct 2 19:26:50.283012 systemd-networkd[1011]: cali899a2c2ae2c: Gained IPv6LL Oct 2 19:26:50.714988 systemd[1]: run-containerd-runc-k8s.io-9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505-runc.Ba9fmK.mount: Deactivated successfully. Oct 2 19:26:50.730197 systemd-networkd[1011]: cali283fb31178f: Gained IPv6LL Oct 2 19:26:50.757503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452217053.mount: Deactivated successfully. 
Oct 2 19:26:50.843417 kubelet[1416]: E1002 19:26:50.843348 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:50.857958 systemd-networkd[1011]: vxlan.calico: Gained IPv6LL Oct 2 19:26:50.914385 env[1110]: time="2023-10-02T19:26:50.914331531Z" level=info msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" Oct 2 19:26:50.914385 env[1110]: time="2023-10-02T19:26:50.914342192Z" level=info msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] k8s.go 576: Cleaning up netns ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" iface="eth0" netns="/var/run/netns/cni-ef700df7-526d-c505-8993-737c01b2bae1" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" iface="eth0" netns="/var/run/netns/cni-ef700df7-526d-c505-8993-737c01b2bae1" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" iface="eth0" netns="/var/run/netns/cni-ef700df7-526d-c505-8993-737c01b2bae1" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] k8s.go 583: Releasing IP address(es) ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:50.980 [INFO][2832] utils.go 196: Calico CNI releasing IP address ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:51.036 [INFO][2846] ipam_plugin.go 416: Releasing address using handleID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.046360 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:51.046360 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:51.043 [WARNING][2846] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:51.043 [INFO][2846] ipam_plugin.go 444: Releasing address using workloadID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.046360 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:51.046360 env[1110]: 2023-10-02 19:26:51.045 [INFO][2832] k8s.go 589: Teardown processing complete. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:26:51.047167 env[1110]: time="2023-10-02T19:26:51.047124240Z" level=info msg="TearDown network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" successfully" Oct 2 19:26:51.047167 env[1110]: time="2023-10-02T19:26:51.047165258Z" level=info msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" returns successfully" Oct 2 19:26:51.048673 systemd[1]: run-netns-cni\x2def700df7\x2d526d\x2dc505\x2d8993\x2d737c01b2bae1.mount: Deactivated successfully. Oct 2 19:26:51.049889 env[1110]: time="2023-10-02T19:26:51.049773684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nztm7,Uid:d465276a-936e-4514-bd15-fe7cf64b503d,Namespace:default,Attempt:1,}" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.975 [INFO][2820] k8s.go 576: Cleaning up netns ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.976 [INFO][2820] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" iface="eth0" netns="/var/run/netns/cni-5abe119f-7832-1461-1181-1490ff0147e1" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.976 [INFO][2820] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" iface="eth0" netns="/var/run/netns/cni-5abe119f-7832-1461-1181-1490ff0147e1" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.976 [INFO][2820] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" iface="eth0" netns="/var/run/netns/cni-5abe119f-7832-1461-1181-1490ff0147e1" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.976 [INFO][2820] k8s.go 583: Releasing IP address(es) ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:50.976 [INFO][2820] utils.go 196: Calico CNI releasing IP address ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:51.037 [INFO][2841] ipam_plugin.go 416: Releasing address using handleID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.055243 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:51.055243 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:51.051 [WARNING][2841] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:51.051 [INFO][2841] ipam_plugin.go 444: Releasing address using workloadID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.055243 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:26:51.055243 env[1110]: 2023-10-02 19:26:51.054 [INFO][2820] k8s.go 589: Teardown processing complete. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:26:51.057318 env[1110]: time="2023-10-02T19:26:51.056975415Z" level=info msg="TearDown network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" successfully" Oct 2 19:26:51.057318 env[1110]: time="2023-10-02T19:26:51.057008858Z" level=info msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" returns successfully" Oct 2 19:26:51.056538 systemd[1]: run-netns-cni\x2d5abe119f\x2d7832\x2d1461\x2d1181\x2d1490ff0147e1.mount: Deactivated successfully. Oct 2 19:26:51.057648 env[1110]: time="2023-10-02T19:26:51.057612445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-75kzt,Uid:b0822001-b43f-4855-b401-678c43b136af,Namespace:calico-system,Attempt:1,}" Oct 2 19:26:51.181434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:26:51.181631 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali49ab8b9563e: link becomes ready Oct 2 19:26:51.188647 systemd-networkd[1011]: cali49ab8b9563e: Link UP Oct 2 19:26:51.188654 systemd-networkd[1011]: cali49ab8b9563e: Gained carrier Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.094 [INFO][2855] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0 nginx-deployment-6d5f899847- default d465276a-936e-4514-bd15-fe7cf64b503d 915 0 2023-10-02 19:26:37 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 nginx-deployment-6d5f899847-nztm7 eth0 default [] [] [kns.default ksa.default.default] cali49ab8b9563e [] []}} ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.094 [INFO][2855] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.140 [INFO][2884] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 
19:26:51.197907 env[1110]: 2023-10-02 19:26:51.158 [INFO][2884] ipam_plugin.go 269: Auto assigning IP ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c98b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"nginx-deployment-6d5f899847-nztm7", "timestamp":"2023-10-02 19:26:51.140043863 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:26:51.197907 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:51.197907 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.159 [INFO][2884] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.160 [INFO][2884] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.163 [INFO][2884] ipam.go 372: Looking up existing affinities for host host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.166 [INFO][2884] ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.167 [INFO][2884] ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.169 [INFO][2884] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.169 [INFO][2884] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.170 [INFO][2884] ipam.go 1682: Creating new handle: k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3 Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.173 [INFO][2884] ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.177 [INFO][2884] ipam.go 1216: Successfully claimed IPs: [192.168.37.3/26] block=192.168.37.0/26 handle="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.177 [INFO][2884] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.3/26] handle="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" host="10.0.0.19" Oct 2 19:26:51.197907 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:51.197907 env[1110]: 2023-10-02 19:26:51.177 [INFO][2884] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.37.3/26] IPv6=[] ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.178 [INFO][2855] k8s.go 383: Populated endpoint ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d465276a-936e-4514-bd15-fe7cf64b503d", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"nginx-deployment-6d5f899847-nztm7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49ab8b9563e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.178 [INFO][2855] k8s.go 384: Calico CNI using IPs: [192.168.37.3/32] ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.178 [INFO][2855] dataplane_linux.go 68: Setting the host side veth name to cali49ab8b9563e ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.181 [INFO][2855] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.189 [INFO][2855] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d465276a-936e-4514-bd15-fe7cf64b503d", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3", Pod:"nginx-deployment-6d5f899847-nztm7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49ab8b9563e", MAC:"42:df:9b:9c:4b:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:51.198511 env[1110]: 2023-10-02 19:26:51.193 [INFO][2855] k8s.go 489: Wrote updated endpoint to datastore ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Namespace="default" Pod="nginx-deployment-6d5f899847-nztm7" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:26:51.219286 kernel: kauditd_printk_skb: 382 callbacks suppressed Oct 2 19:26:51.219402 kernel: audit: type=1325 audit(1696274811.211:765): table=filter:72 family=2 entries=44 op=nft_register_chain pid=2918 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:51.219434 kernel: audit: type=1300 audit(1696274811.211:765): arch=c000003e syscall=46 success=yes exit=22252 a0=3 a1=7ffda21fa1e0 a2=0 a3=7ffda21fa1cc items=0 ppid=2373 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.219461 kernel: audit: type=1327 audit(1696274811.211:765): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:51.211000 audit[2918]: NETFILTER_CFG table=filter:72 family=2 entries=44 op=nft_register_chain pid=2918 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:51.211000 audit[2918]: SYSCALL arch=c000003e syscall=46 success=yes exit=22252 a0=3 a1=7ffda21fa1e0 a2=0 a3=7ffda21fa1cc items=0 ppid=2373 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.211000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:51.219698 env[1110]: time="2023-10-02T19:26:51.217044115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:51.219698 env[1110]: time="2023-10-02T19:26:51.217075484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:51.219698 env[1110]: time="2023-10-02T19:26:51.217084712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:51.219698 env[1110]: time="2023-10-02T19:26:51.218426180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3 pid=2925 runtime=io.containerd.runc.v2 Oct 2 19:26:51.225869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali82f14e091e7: link becomes ready Oct 2 19:26:51.230778 systemd-networkd[1011]: cali82f14e091e7: Link UP Oct 2 19:26:51.230806 systemd-networkd[1011]: cali82f14e091e7: Gained carrier Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.127 [INFO][2871] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-csi--node--driver--75kzt-eth0 csi-node-driver- calico-system b0822001-b43f-4855-b401-678c43b136af 914 0 2023-10-02 19:26:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6b49688c47 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 csi-node-driver-75kzt eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali82f14e091e7 [] []}} ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.127 [INFO][2871] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.185 [INFO][2893] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" HandleID="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.195 [INFO][2893] ipam_plugin.go 269: Auto assigning IP ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" HandleID="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.19", "pod":"csi-node-driver-75kzt", "timestamp":"2023-10-02 19:26:51.185171976 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:26:51.241164 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="About to acquire host-wide IPAM lock." 
source="ipam_plugin.go:357" Oct 2 19:26:51.241164 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.195 [INFO][2893] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.197 [INFO][2893] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.200 [INFO][2893] ipam.go 372: Looking up existing affinities for host host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.204 [INFO][2893] ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.205 [INFO][2893] ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.207 [INFO][2893] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.207 [INFO][2893] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.208 [INFO][2893] ipam.go 1682: Creating new handle: k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0 Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.211 [INFO][2893] ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.220 [INFO][2893] ipam.go 1216: Successfully claimed IPs: [192.168.37.4/26] block=192.168.37.0/26 handle="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.220 [INFO][2893] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.4/26] handle="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" host="10.0.0.19" Oct 2 19:26:51.241164 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:51.241164 env[1110]: 2023-10-02 19:26:51.220 [INFO][2893] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.37.4/26] IPv6=[] ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" HandleID="k8s-pod-network.9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.221 [INFO][2871] k8s.go 383: Populated endpoint ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--75kzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0822001-b43f-4855-b401-678c43b136af", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"csi-node-driver-75kzt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82f14e091e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.221 [INFO][2871] k8s.go 384: Calico CNI using IPs: [192.168.37.4/32] ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.221 [INFO][2871] dataplane_linux.go 68: Setting the host side veth name to cali82f14e091e7 ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.225 [INFO][2871] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.231 [INFO][2871] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--75kzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0822001-b43f-4855-b401-678c43b136af", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0", Pod:"csi-node-driver-75kzt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82f14e091e7", MAC:"da:0f:97:e4:5d:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:51.241728 env[1110]: 2023-10-02 19:26:51.237 [INFO][2871] k8s.go 489: Wrote updated endpoint to datastore ContainerID="9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0" Namespace="calico-system" Pod="csi-node-driver-75kzt" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:26:51.246987 systemd[1]: Started cri-containerd-27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3.scope. 
Oct 2 19:26:51.262826 kernel: audit: type=1325 audit(1696274811.251:766): table=filter:73 family=2 entries=48 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:51.262963 kernel: audit: type=1300 audit(1696274811.251:766): arch=c000003e syscall=46 success=yes exit=23548 a0=3 a1=7fff6345f1e0 a2=0 a3=7fff6345f1cc items=0 ppid=2373 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.262981 kernel: audit: type=1327 audit(1696274811.251:766): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:51.262996 kernel: audit: type=1400 audit(1696274811.257:767): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.263028 kernel: audit: type=1400 audit(1696274811.257:768): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.251000 audit[2956]: NETFILTER_CFG table=filter:73 family=2 entries=48 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:51.251000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=23548 a0=3 a1=7fff6345f1e0 a2=0 a3=7fff6345f1cc items=0 ppid=2373 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.251000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.266806 kernel: audit: type=1400 audit(1696274811.257:769): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.266862 kernel: audit: type=1400 audit(1696274811.257:770): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit: BPF prog-id=104 op=LOAD Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=2925 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237663934313366333334343661656134613837653030643434353365 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=2925 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237663934313366333334343661656134613837653030643434353365 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.258000 audit: BPF prog-id=105 op=LOAD Oct 2 19:26:51.258000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001179d8 a2=78 a3=c0002b0a10 items=0 ppid=2925 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237663934313366333334343661656134613837653030643434353365 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.260000 audit: BPF prog-id=106 op=LOAD Oct 2 19:26:51.260000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000117770 a2=78 a3=c0002b0a58 items=0 ppid=2925 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237663934313366333334343661656134613837653030643434353365 Oct 2 19:26:51.262000 audit: BPF prog-id=106 op=UNLOAD Oct 2 19:26:51.262000 audit: BPF prog-id=105 op=UNLOAD Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { perfmon } for pid=2935 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit[2935]: AVC avc: denied { bpf } for pid=2935 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.262000 audit: BPF prog-id=107 op=LOAD Oct 2 19:26:51.262000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000117c30 a2=78 a3=c0002b0e68 items=0 ppid=2925 
pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237663934313366333334343661656134613837653030643434353365 Oct 2 19:26:51.269121 systemd-resolved[1056]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:26:51.276204 env[1110]: time="2023-10-02T19:26:51.276138769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:51.276204 env[1110]: time="2023-10-02T19:26:51.276170459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:51.276204 env[1110]: time="2023-10-02T19:26:51.276186129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:51.276704 env[1110]: time="2023-10-02T19:26:51.276275639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0 pid=2974 runtime=io.containerd.runc.v2 Oct 2 19:26:51.291503 systemd[1]: Started cri-containerd-9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0.scope. Oct 2 19:26:51.294768 env[1110]: time="2023-10-02T19:26:51.294725349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nztm7,Uid:d465276a-936e-4514-bd15-fe7cf64b503d,Namespace:default,Attempt:1,} returns sandbox id \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\"" Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.302000 audit: BPF prog-id=108 op=LOAD Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2974 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965633461653230323035366133643666663263653337396631333331 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=c items=0 ppid=2974 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965633461653230323035366133643666663263653337396631333331 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit: BPF prog-id=109 op=LOAD Oct 2 19:26:51.304000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c00018d360 items=0 ppid=2974 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965633461653230323035366133643666663263653337396631333331 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit: BPF prog-id=110 op=LOAD Oct 2 19:26:51.304000 audit[2982]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c00018d3a8 items=0 ppid=2974 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965633461653230323035366133643666663263653337396631333331 Oct 2 19:26:51.304000 audit: BPF prog-id=110 op=UNLOAD Oct 2 19:26:51.304000 audit: BPF prog-id=109 op=UNLOAD Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { perfmon } for pid=2982 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit[2982]: AVC avc: denied { bpf } for pid=2982 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.304000 audit: BPF prog-id=111 op=LOAD Oct 2 19:26:51.304000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c00018d7b8 items=0 ppid=2974 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965633461653230323035366133643666663263653337396631333331 Oct 2 19:26:51.306057 
systemd-resolved[1056]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:26:51.319255 env[1110]: time="2023-10-02T19:26:51.319214211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-75kzt,Uid:b0822001-b43f-4855-b401-678c43b136af,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0\"" Oct 2 19:26:51.400680 env[1110]: time="2023-10-02T19:26:51.400637725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:51.402697 env[1110]: time="2023-10-02T19:26:51.402628086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:51.404118 env[1110]: time="2023-10-02T19:26:51.404072821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:51.405299 env[1110]: time="2023-10-02T19:26:51.405261138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:51.405816 env[1110]: time="2023-10-02T19:26:51.405777530Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Oct 2 19:26:51.406332 env[1110]: time="2023-10-02T19:26:51.406289372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.25.0\"" Oct 2 19:26:51.407824 env[1110]: time="2023-10-02T19:26:51.407798699Z" level=info msg="CreateContainer within sandbox \"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 2 19:26:51.419349 env[1110]: time="2023-10-02T19:26:51.419293867Z" level=info msg="CreateContainer within sandbox \"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75bbbef46165ff3553bb12bd9bd8c015915f65252f4cf0ae23c5bd3019d0dcec\"" Oct 2 19:26:51.419924 env[1110]: time="2023-10-02T19:26:51.419880903Z" level=info msg="StartContainer for \"75bbbef46165ff3553bb12bd9bd8c015915f65252f4cf0ae23c5bd3019d0dcec\"" Oct 2 19:26:51.436601 systemd[1]: Started cri-containerd-75bbbef46165ff3553bb12bd9bd8c015915f65252f4cf0ae23c5bd3019d0dcec.scope. 
Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.448000 audit: BPF prog-id=112 op=LOAD Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2508 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735626262656634363136356666333535336262313262643962643863 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2508 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.449000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735626262656634363136356666333535336262313262643962643863 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit: BPF prog-id=113 op=LOAD Oct 2 19:26:51.449000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000187820 items=0 ppid=2508 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735626262656634363136356666333535336262313262643962643863 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: 
denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit: BPF prog-id=114 op=LOAD Oct 2 19:26:51.449000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000187868 items=0 ppid=2508 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735626262656634363136356666333535336262313262643962643863 Oct 2 19:26:51.449000 audit: BPF prog-id=114 op=UNLOAD Oct 2 19:26:51.449000 audit: BPF prog-id=113 op=UNLOAD Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC 
avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { perfmon } for pid=3020 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit[3020]: AVC avc: denied { bpf } for pid=3020 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:51.449000 audit: BPF prog-id=115 op=LOAD Oct 2 19:26:51.449000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000187c78 items=0 ppid=2508 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:51.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735626262656634363136356666333535336262313262643962643863 Oct 2 19:26:51.467376 env[1110]: time="2023-10-02T19:26:51.467339822Z" level=info msg="StartContainer for \"75bbbef46165ff3553bb12bd9bd8c015915f65252f4cf0ae23c5bd3019d0dcec\" returns successfully" Oct 2 19:26:51.843816 kubelet[1416]: E1002 19:26:51.843747 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:51.914102 env[1110]: time="2023-10-02T19:26:51.914062571Z" level=info msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.955 [INFO][3068] k8s.go 576: Cleaning up netns ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.955 [INFO][3068] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" iface="eth0" netns="/var/run/netns/cni-94745cd2-cc70-a9c8-a8f5-5fc8219215e3" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.955 [INFO][3068] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" iface="eth0" netns="/var/run/netns/cni-94745cd2-cc70-a9c8-a8f5-5fc8219215e3" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.958 [INFO][3068] dataplane_linux.go 562: Workload's veth was already gone. Nothing to do. 
ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" iface="eth0" netns="/var/run/netns/cni-94745cd2-cc70-a9c8-a8f5-5fc8219215e3" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.958 [INFO][3068] k8s.go 583: Releasing IP address(es) ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.958 [INFO][3068] utils.go 196: Calico CNI releasing IP address ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.982 [INFO][3075] ipam_plugin.go 416: Releasing address using handleID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:51.992072 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:51.992072 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.988 [WARNING][3075] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.988 [INFO][3075] ipam_plugin.go 444: Releasing address using workloadID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:51.992072 env[1110]: time="2023-10-02T19:26:51Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:26:51.992072 env[1110]: 2023-10-02 19:26:51.990 [INFO][3068] k8s.go 589: Teardown processing complete. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:26:51.992519 env[1110]: time="2023-10-02T19:26:51.992282389Z" level=info msg="TearDown network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" successfully" Oct 2 19:26:51.992519 env[1110]: time="2023-10-02T19:26:51.992332684Z" level=info msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" returns successfully" Oct 2 19:26:51.992799 kubelet[1416]: E1002 19:26:51.992755 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:51.993630 env[1110]: time="2023-10-02T19:26:51.993586506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8c5qr,Uid:abf6e2c9-193c-4296-8247-02d6e5da6ae3,Namespace:kube-system,Attempt:1,}" Oct 2 19:26:51.993670 systemd[1]: run-netns-cni\x2d94745cd2\x2dcc70\x2da9c8\x2da8f5\x2d5fc8219215e3.mount: Deactivated successfully. 
Oct 2 19:26:52.098811 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali282aa86e6d8: link becomes ready Oct 2 19:26:52.103909 systemd-networkd[1011]: cali282aa86e6d8: Link UP Oct 2 19:26:52.103918 systemd-networkd[1011]: cali282aa86e6d8: Gained carrier Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.035 [INFO][3082] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0 coredns-5dd5756b68- kube-system abf6e2c9-193c-4296-8247-02d6e5da6ae3 937 0 2023-10-02 19:26:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.19 coredns-5dd5756b68-8c5qr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali282aa86e6d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.035 [INFO][3082] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.062 [INFO][3096] ipam_plugin.go 229: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" HandleID="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.072 [INFO][3096] ipam_plugin.go 269: Auto assigning IP ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" HandleID="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b7990), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.19", "pod":"coredns-5dd5756b68-8c5qr", "timestamp":"2023-10-02 19:26:52.062005162 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 2 19:26:52.111094 env[1110]: time="2023-10-02T19:26:52Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:26:52.111094 env[1110]: time="2023-10-02T19:26:52Z" level=info msg="Acquired host-wide IPAM lock." 
source="ipam_plugin.go:372" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.072 [INFO][3096] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.074 [INFO][3096] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.081 [INFO][3096] ipam.go 372: Looking up existing affinities for host host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.084 [INFO][3096] ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.085 [INFO][3096] ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.087 [INFO][3096] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.087 [INFO][3096] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.088 [INFO][3096] ipam.go 1682: Creating new handle: k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77 Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.091 [INFO][3096] ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.095 [INFO][3096] ipam.go 1216: Successfully claimed IPs: [192.168.37.5/26] block=192.168.37.0/26 handle="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.095 [INFO][3096] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.5/26] handle="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" host="10.0.0.19" Oct 2 19:26:52.111094 env[1110]: time="2023-10-02T19:26:52Z" level=info msg="Released host-wide IPAM lock." 
source="ipam_plugin.go:378" Oct 2 19:26:52.111094 env[1110]: 2023-10-02 19:26:52.095 [INFO][3096] ipam_plugin.go 287: Calico CNI IPAM assigned addresses IPv4=[192.168.37.5/26] IPv6=[] ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" HandleID="k8s-pod-network.52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.096 [INFO][3082] k8s.go 383: Populated endpoint ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"abf6e2c9-193c-4296-8247-02d6e5da6ae3", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"coredns-5dd5756b68-8c5qr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282aa86e6d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.096 [INFO][3082] k8s.go 384: Calico CNI using IPs: [192.168.37.5/32] ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.096 [INFO][3082] dataplane_linux.go 68: Setting the host side veth name to cali282aa86e6d8 ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.098 [INFO][3082] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.104 [INFO][3082] k8s.go 411: Added Mac, interface 
name, and active container ID to endpoint ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"abf6e2c9-193c-4296-8247-02d6e5da6ae3", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77", Pod:"coredns-5dd5756b68-8c5qr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282aa86e6d8", MAC:"02:79:08:51:b1:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:26:52.111849 env[1110]: 2023-10-02 19:26:52.109 [INFO][3082] k8s.go 489: Wrote updated endpoint to datastore ContainerID="52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77" Namespace="kube-system" Pod="coredns-5dd5756b68-8c5qr" WorkloadEndpoint="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:26:52.123913 env[1110]: time="2023-10-02T19:26:52.123837853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:26:52.124086 env[1110]: time="2023-10-02T19:26:52.123913387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:26:52.124086 env[1110]: time="2023-10-02T19:26:52.123926822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:26:52.124657 env[1110]: time="2023-10-02T19:26:52.124583429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77 pid=3126 runtime=io.containerd.runc.v2 Oct 2 19:26:52.123000 audit[3132]: NETFILTER_CFG table=filter:74 family=2 entries=44 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:26:52.123000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=21924 a0=3 a1=7ffca451a240 a2=0 a3=7ffca451a22c items=0 ppid=2373 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.123000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:26:52.139214 systemd[1]: Started cri-containerd-52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77.scope. Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.149000 audit: BPF prog-id=116 op=LOAD Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:26:52.150000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014fc48 a2=10 a3=1c items=0 ppid=3126 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636364336533323235356330666533356330373962306161323263 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014f6b0 a2=3c a3=c items=0 ppid=3126 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636364336533323235356330666533356330373962306161323263 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit: BPF prog-id=117 op=LOAD Oct 2 19:26:52.150000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014f9d8 a2=78 a3=c0000247b0 items=0 ppid=3126 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636364336533323235356330666533356330373962306161323263 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit: BPF prog-id=118 op=LOAD Oct 2 19:26:52.150000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00014f770 a2=78 a3=c0000247f8 items=0 ppid=3126 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636364336533323235356330666533356330373962306161323263 Oct 2 19:26:52.150000 audit: BPF prog-id=118 op=UNLOAD Oct 2 19:26:52.150000 audit: BPF prog-id=117 op=UNLOAD Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } 
for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.150000 audit: BPF prog-id=119 op=LOAD Oct 2 19:26:52.150000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014fc30 a2=78 a3=c000024c08 items=0 ppid=3126 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532636364336533323235356330666533356330373962306161323263 Oct 2 19:26:52.152024 systemd-resolved[1056]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 2 19:26:52.173912 env[1110]: time="2023-10-02T19:26:52.173868994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8c5qr,Uid:abf6e2c9-193c-4296-8247-02d6e5da6ae3,Namespace:kube-system,Attempt:1,} returns sandbox id \"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77\"" Oct 2 19:26:52.174684 kubelet[1416]: E1002 19:26:52.174661 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:52.176976 env[1110]: time="2023-10-02T19:26:52.176944050Z" level=info msg="CreateContainer 
within sandbox \"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 2 19:26:52.188996 env[1110]: time="2023-10-02T19:26:52.188949700Z" level=info msg="CreateContainer within sandbox \"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dab736a15b8770ba5c6325cb1569b85db3317f309a6048ebd6567db88a7cbadf\"" Oct 2 19:26:52.189530 env[1110]: time="2023-10-02T19:26:52.189503010Z" level=info msg="StartContainer for \"dab736a15b8770ba5c6325cb1569b85db3317f309a6048ebd6567db88a7cbadf\"" Oct 2 19:26:52.203512 systemd[1]: Started cri-containerd-dab736a15b8770ba5c6325cb1569b85db3317f309a6048ebd6567db88a7cbadf.scope. Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit: BPF prog-id=120 op=LOAD Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=3126 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.213000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623733366131356238373730626135633633323563623135363962 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=3126 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623733366131356238373730626135633633323563623135363962 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.213000 audit: BPF prog-id=121 op=LOAD Oct 2 19:26:52.213000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00037c0d0 items=0 ppid=3126 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623733366131356238373730626135633633323563623135363962 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit: BPF prog-id=122 op=LOAD Oct 2 19:26:52.214000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00037c118 items=0 ppid=3126 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623733366131356238373730626135633633323563623135363962 Oct 2 19:26:52.214000 audit: BPF prog-id=122 op=UNLOAD Oct 2 19:26:52.214000 audit: BPF prog-id=121 op=UNLOAD Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: 
AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { perfmon } for pid=3167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit[3167]: AVC avc: denied { bpf } for pid=3167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:52.214000 audit: BPF prog-id=123 op=LOAD Oct 2 19:26:52.214000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00037c528 items=0 ppid=3126 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461623733366131356238373730626135633633323563623135363962 Oct 2 19:26:52.231153 env[1110]: time="2023-10-02T19:26:52.231107825Z" level=info msg="StartContainer for \"dab736a15b8770ba5c6325cb1569b85db3317f309a6048ebd6567db88a7cbadf\" returns successfully" Oct 2 19:26:52.283572 kubelet[1416]: E1002 19:26:52.283539 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:52.288625 kubelet[1416]: E1002 19:26:52.288606 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:52.291321 kubelet[1416]: I1002 19:26:52.291209 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kq6xj" podStartSLOduration=49.205846476 podCreationTimestamp="2023-10-02 19:26:00 +0000 UTC" firstStartedPulling="2023-10-02 19:26:48.320795667 +0000 UTC m=+33.854509678" lastFinishedPulling="2023-10-02 19:26:51.406111143 +0000 UTC m=+36.939825144" observedRunningTime="2023-10-02 19:26:52.290801327 +0000 UTC m=+37.824515338" 
watchObservedRunningTime="2023-10-02 19:26:52.291161942 +0000 UTC m=+37.824875953" Oct 2 19:26:52.300000 audit[3202]: NETFILTER_CFG table=filter:75 family=2 entries=14 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:52.300000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffda544d560 a2=0 a3=7ffda544d54c items=0 ppid=1630 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:52.300000 audit[3202]: NETFILTER_CFG table=nat:76 family=2 entries=14 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:52.300000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3300 a0=3 a1=7ffda544d560 a2=0 a3=31030 items=0 ppid=1630 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:52.309000 audit[3204]: NETFILTER_CFG table=filter:77 family=2 entries=11 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:52.309000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=2844 a0=3 a1=7fff3ecac920 a2=0 a3=7fff3ecac90c items=0 ppid=1630 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:52.312700 kubelet[1416]: I1002 19:26:52.312665 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8c5qr" podStartSLOduration=52.312620825 podCreationTimestamp="2023-10-02 19:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:26:52.312531555 +0000 UTC m=+37.846245566" watchObservedRunningTime="2023-10-02 19:26:52.312620825 +0000 UTC m=+37.846334826" Oct 2 19:26:52.311000 audit[3204]: NETFILTER_CFG table=nat:78 family=2 entries=35 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:52.311000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=13788 a0=3 a1=7fff3ecac920 a2=0 a3=7fff3ecac90c items=0 ppid=1630 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:52.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:52.522019 systemd-networkd[1011]: cali49ab8b9563e: Gained IPv6LL Oct 2 19:26:52.616855 update_engine[1102]: I1002 19:26:52.616763 1102 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:26:52.844829 kubelet[1416]: E1002 19:26:52.844660 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:52.905994 systemd-networkd[1011]: cali82f14e091e7: Gained IPv6LL Oct 2 19:26:53.290720 kubelet[1416]: E1002 19:26:53.290670 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:53.290720 kubelet[1416]: E1002 19:26:53.290707 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:53.307000 audit[3223]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:53.307000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=2844 a0=3 a1=7ffeff5d8e80 a2=0 a3=7ffeff5d8e6c items=0 ppid=1630 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:53.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:53.311000 audit[3223]: NETFILTER_CFG table=nat:80 family=2 entries=56 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:26:53.311000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=19452 a0=3 a1=7ffeff5d8e80 a2=0 a3=7ffeff5d8e6c items=0 ppid=1630 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:53.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:26:53.738017 systemd-networkd[1011]: cali282aa86e6d8: Gained IPv6LL Oct 2 19:26:53.845128 kubelet[1416]: E1002 19:26:53.845073 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:54.292425 kubelet[1416]: E1002 19:26:54.292392 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:54.292425 kubelet[1416]: E1002 19:26:54.292393 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:54.819729 kubelet[1416]: E1002 19:26:54.819660 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:54.845989 kubelet[1416]: E1002 19:26:54.845936 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:55.019444 env[1110]: time="2023-10-02T19:26:55.019360546Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:55.021470 env[1110]: time="2023-10-02T19:26:55.021415839Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:55.023404 env[1110]: time="2023-10-02T19:26:55.023352657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.25.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:55.025261 env[1110]: time="2023-10-02T19:26:55.025149441Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:b764feb1777655aabce5988324b69b412d23e087436ee2414dff893a158fcdef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:26:55.026183 env[1110]: time="2023-10-02T19:26:55.026114500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.25.0\" returns image reference \"sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78\"" Oct 2 19:26:55.027294 env[1110]: time="2023-10-02T19:26:55.027244259Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Oct 2 19:26:55.028307 env[1110]: time="2023-10-02T19:26:55.028279099Z" level=info msg="CreateContainer within sandbox \"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 2 19:26:55.039377 env[1110]: time="2023-10-02T19:26:55.039329170Z" level=info msg="CreateContainer within sandbox \"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4\"" Oct 2 19:26:55.039916 env[1110]: time="2023-10-02T19:26:55.039846490Z" level=info msg="StartContainer for \"9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4\"" Oct 2 19:26:55.058830 systemd[1]: Started cri-containerd-9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4.scope. 
Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.066000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit: BPF prog-id=124 op=LOAD Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2692 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:55.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964623730333962383764653362376539336466643962646137356439 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2692 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:55.067000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964623730333962383764653362376539336466643962646137356439 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit: BPF prog-id=125 op=LOAD Oct 2 19:26:55.067000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00037e0d0 items=0 ppid=2692 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:55.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964623730333962383764653362376539336466643962646137356439 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: 
denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.067000 audit: BPF prog-id=126 op=LOAD Oct 2 19:26:55.067000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00037e118 items=0 ppid=2692 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:55.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964623730333962383764653362376539336466643962646137356439 Oct 2 19:26:55.068000 audit: BPF prog-id=126 op=UNLOAD Oct 2 19:26:55.068000 audit: BPF prog-id=125 op=UNLOAD Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC 
avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { perfmon } for pid=3234 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit[3234]: AVC avc: denied { bpf } for pid=3234 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:26:55.068000 audit: BPF prog-id=127 op=LOAD Oct 2 19:26:55.068000 audit[3234]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00037e528 items=0 ppid=2692 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:55.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964623730333962383764653362376539336466643962646137356439 Oct 2 19:26:55.082746 env[1110]: time="2023-10-02T19:26:55.082686267Z" level=info msg="StartContainer for \"9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4\" returns successfully" Oct 2 19:26:55.335520 kubelet[1416]: I1002 19:26:55.335284 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" podStartSLOduration=39.516371852 podCreationTimestamp="2023-10-02 19:26:10 +0000 UTC" firstStartedPulling="2023-10-02 19:26:49.207723752 +0000 UTC m=+34.741437763" lastFinishedPulling="2023-10-02 19:26:55.026583727 +0000 UTC m=+40.560297739" observedRunningTime="2023-10-02 19:26:55.304232359 +0000 UTC m=+40.837946390" watchObservedRunningTime="2023-10-02 19:26:55.335231828 +0000 UTC m=+40.868945839" Oct 2 19:26:55.539004 kubelet[1416]: E1002 19:26:55.538972 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:26:55.846500 kubelet[1416]: E1002 19:26:55.846430 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:56.847620 kubelet[1416]: E1002 19:26:56.847562 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:57.848086 kubelet[1416]: E1002 19:26:57.848012 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:58.848254 kubelet[1416]: E1002 19:26:58.848182 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:26:59.007296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276997061.mount: Deactivated successfully. 
Oct 2 19:26:59.849310 kubelet[1416]: E1002 19:26:59.849255 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:00.102964 env[1110]: time="2023-10-02T19:27:00.102857629Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:00.104493 env[1110]: time="2023-10-02T19:27:00.104452342Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:22c2ef579d5668dbfa645a84c3a2e988885c114561e9a560a97b2d0ea6d6c988,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:00.106034 env[1110]: time="2023-10-02T19:27:00.105995016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:00.107376 env[1110]: time="2023-10-02T19:27:00.107351749Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:637f6b877b0a51c456b44ec74046864b5131a87cb1c4536f11170201073027cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:00.107956 env[1110]: time="2023-10-02T19:27:00.107926835Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22c2ef579d5668dbfa645a84c3a2e988885c114561e9a560a97b2d0ea6d6c988\"" Oct 2 19:27:00.108596 env[1110]: time="2023-10-02T19:27:00.108574037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:27:00.109231 env[1110]: time="2023-10-02T19:27:00.109206643Z" level=info msg="CreateContainer within sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Oct 2 19:27:00.119713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371991407.mount: Deactivated successfully. Oct 2 19:27:00.120805 env[1110]: time="2023-10-02T19:27:00.120759556Z" level=info msg="CreateContainer within sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\"" Oct 2 19:27:00.121217 env[1110]: time="2023-10-02T19:27:00.121187585Z" level=info msg="StartContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\"" Oct 2 19:27:00.136058 systemd[1]: Started cri-containerd-e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb.scope. 
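Timestamps in the kubelet records above, for example the "Observed pod startup duration" entry for calico-kube-controllers-74b9887bb6-g8t2d, are printed in Go's time format with a trailing monotonic-clock reading such as m=+40.560297739. The following editorial sketch, again not part of the log, strips that suffix, truncates to microseconds (Python's datetime does not carry nanoseconds), and computes the image-pull window between firstStartedPulling and lastFinishedPulling:

    # Editorial sketch: parse the kubelet timestamp format seen above and
    # measure how long this pod spent pulling images.
    from datetime import datetime, timezone

    def parse_kubelet_time(s: str) -> datetime:
        # e.g. "2023-10-02 19:26:49.207723752 +0000 UTC m=+34.741437763"
        stamp = s.split(" +0000")[0]            # drop zone and monotonic suffix
        base, frac = stamp.split(".")
        dt = datetime.strptime(base, "%Y-%m-%d %H:%M:%S")
        return dt.replace(microsecond=int(frac[:6]), tzinfo=timezone.utc)

    started = parse_kubelet_time("2023-10-02 19:26:49.207723752 +0000 UTC m=+34.741437763")
    finished = parse_kubelet_time("2023-10-02 19:26:55.026583727 +0000 UTC m=+40.560297739")
    print((finished - started).total_seconds())  # ~5.82 s between first pull start and last pull finish

The same format appears in the later "Observed pod startup duration" entry for the nginx deployment, so the helper applies there as well.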
Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146025 kernel: kauditd_printk_skb: 359 callbacks suppressed Oct 2 19:27:00.146107 kernel: audit: type=1400 audit(1696274820.143:882): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149509 kernel: audit: type=1400 audit(1696274820.143:883): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149579 kernel: audit: type=1400 audit(1696274820.143:884): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.153044 kernel: audit: type=1400 audit(1696274820.143:885): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.153083 kernel: audit: type=1400 audit(1696274820.143:886): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.154916 kernel: audit: type=1400 audit(1696274820.143:887): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.158467 kernel: audit: type=1400 audit(1696274820.143:888): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.160680 kernel: audit: type=1400 audit(1696274820.143:889): avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.143000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.164592 kernel: audit: type=1400 audit(1696274820.143:890): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.164649 kernel: audit: type=1400 audit(1696274820.145:891): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.145000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.145000 audit: BPF prog-id=128 op=LOAD Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00019fc48 a2=10 a3=1c items=0 ppid=2925 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663835303935386263666332333239653835316562616362656638 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00019f6b0 a2=3c a3=8 items=0 ppid=2925 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663835303935386263666332333239653835316562616362656638 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.146000 audit: BPF prog-id=129 op=LOAD Oct 2 19:27:00.146000 audit[3347]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019f9d8 a2=78 a3=c000324bc0 items=0 ppid=2925 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663835303935386263666332333239653835316562616362656638 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.149000 audit: BPF prog-id=130 op=LOAD Oct 2 19:27:00.149000 audit[3347]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019f770 a2=78 a3=c000324c08 items=0 ppid=2925 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663835303935386263666332333239653835316562616362656638 Oct 2 19:27:00.151000 audit: BPF prog-id=130 op=UNLOAD Oct 2 19:27:00.151000 audit: BPF prog-id=129 op=UNLOAD Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { perfmon } for pid=3347 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit[3347]: AVC avc: denied { bpf } for pid=3347 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:00.151000 audit: BPF prog-id=131 op=LOAD Oct 2 19:27:00.151000 audit[3347]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019fc30 a2=78 a3=c000325018 items=0 ppid=2925 
pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537663835303935386263666332333239653835316562616362656638 Oct 2 19:27:00.171843 env[1110]: time="2023-10-02T19:27:00.171434722Z" level=info msg="StartContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" returns successfully" Oct 2 19:27:00.313745 kubelet[1416]: I1002 19:27:00.313716 1416 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-nztm7" podStartSLOduration=14.501154994 podCreationTimestamp="2023-10-02 19:26:37 +0000 UTC" firstStartedPulling="2023-10-02 19:26:51.295675153 +0000 UTC m=+36.829389164" lastFinishedPulling="2023-10-02 19:27:00.108200892 +0000 UTC m=+45.641914904" observedRunningTime="2023-10-02 19:27:00.313471719 +0000 UTC m=+45.847185730" watchObservedRunningTime="2023-10-02 19:27:00.313680734 +0000 UTC m=+45.847394745" Oct 2 19:27:00.391641 env[1110]: time="2023-10-02T19:27:00.391539249Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:00.392710 env[1110]: time="2023-10-02T19:27:00.392664875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:00.393125 kubelet[1416]: E1002 19:27:00.393082 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:00.393188 kubelet[1416]: E1002 19:27:00.393131 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:00.393325 kubelet[1416]: E1002 19:27:00.393301 1416 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:00.393975 env[1110]: time="2023-10-02T19:27:00.393952958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:27:00.651637 env[1110]: time="2023-10-02T19:27:00.651464878Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:00.652674 env[1110]: time="2023-10-02T19:27:00.652615823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:00.652953 kubelet[1416]: E1002 19:27:00.652929 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:00.653019 kubelet[1416]: E1002 19:27:00.652972 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:00.653105 kubelet[1416]: E1002 19:27:00.653070 1416 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:00.653220 kubelet[1416]: E1002 19:27:00.653126 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:00.850214 kubelet[1416]: E1002 19:27:00.850187 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:01.309481 kubelet[1416]: E1002 19:27:01.309453 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:01.850771 kubelet[1416]: E1002 19:27:01.850722 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:02.851466 kubelet[1416]: E1002 19:27:02.851378 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:03.852583 kubelet[1416]: E1002 19:27:03.852525 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.852810 kubelet[1416]: E1002 19:27:04.852723 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.928817 kubelet[1416]: I1002 19:27:04.928763 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:04.928817 kubelet[1416]: I1002 19:27:04.928813 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:04.930255 env[1110]: time="2023-10-02T19:27:04.930216545Z" level=info msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.963 [WARNING][3422] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--75kzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0822001-b43f-4855-b401-678c43b136af", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0", Pod:"csi-node-driver-75kzt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82f14e091e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.963 [INFO][3422] k8s.go 576: Cleaning up netns ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.964 [INFO][3422] dataplane_linux.go 520: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" iface="eth0" netns="" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.964 [INFO][3422] k8s.go 583: Releasing IP address(es) ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.964 [INFO][3422] utils.go 196: Calico CNI releasing IP address ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.981 [INFO][3430] ipam_plugin.go 416: Releasing address using handleID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:04.992755 env[1110]: time="2023-10-02T19:27:04Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:04.992755 env[1110]: time="2023-10-02T19:27:04Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.988 [WARNING][3430] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.989 [INFO][3430] ipam_plugin.go 444: Releasing address using workloadID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:04.992755 env[1110]: time="2023-10-02T19:27:04Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:04.992755 env[1110]: 2023-10-02 19:27:04.991 [INFO][3422] k8s.go 589: Teardown processing complete. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:04.993388 env[1110]: time="2023-10-02T19:27:04.992766358Z" level=info msg="TearDown network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" successfully" Oct 2 19:27:04.993388 env[1110]: time="2023-10-02T19:27:04.992815240Z" level=info msg="StopPodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" returns successfully" Oct 2 19:27:04.993506 env[1110]: time="2023-10-02T19:27:04.993477779Z" level=info msg="RemovePodSandbox for \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" Oct 2 19:27:04.993547 env[1110]: time="2023-10-02T19:27:04.993509610Z" level=info msg="Forcibly stopping sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\"" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.024 [WARNING][3454] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--75kzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0822001-b43f-4855-b401-678c43b136af", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6b49688c47", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"9ec4ae202056a3d6ff2ce379f1331c6c8970b804e3c3f8817484aba39399a3a0", Pod:"csi-node-driver-75kzt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82f14e091e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.024 [INFO][3454] k8s.go 576: Cleaning up netns ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.024 [INFO][3454] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" iface="eth0" netns="" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.024 [INFO][3454] k8s.go 583: Releasing IP address(es) ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.024 [INFO][3454] utils.go 196: Calico CNI releasing IP address ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.041 [INFO][3462] ipam_plugin.go 416: Releasing address using handleID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:05.051165 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.051165 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.047 [WARNING][3462] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.047 [INFO][3462] ipam_plugin.go 444: Releasing address using workloadID ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" HandleID="k8s-pod-network.01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Workload="10.0.0.19-k8s-csi--node--driver--75kzt-eth0" Oct 2 19:27:05.051165 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.051165 env[1110]: 2023-10-02 19:27:05.049 [INFO][3454] k8s.go 589: Teardown processing complete. ContainerID="01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2" Oct 2 19:27:05.051165 env[1110]: time="2023-10-02T19:27:05.051148879Z" level=info msg="TearDown network for sandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" successfully" Oct 2 19:27:05.397359 env[1110]: time="2023-10-02T19:27:05.397277573Z" level=info msg="RemovePodSandbox \"01634c251531bd8fce100a94c13e5889a59f716762aa1324862e3110846c99b2\" returns successfully" Oct 2 19:27:05.398027 env[1110]: time="2023-10-02T19:27:05.397980980Z" level=info msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.435 [WARNING][3485] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"76dde907-d81f-4af1-8608-00e5081994e4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4", Pod:"calico-kube-controllers-74b9887bb6-g8t2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899a2c2ae2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.435 [INFO][3485] k8s.go 576: Cleaning up netns ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.435 
[INFO][3485] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" iface="eth0" netns="" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.435 [INFO][3485] k8s.go 583: Releasing IP address(es) ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.435 [INFO][3485] utils.go 196: Calico CNI releasing IP address ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.452 [INFO][3492] ipam_plugin.go 416: Releasing address using handleID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.462457 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.462457 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.458 [WARNING][3492] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.458 [INFO][3492] ipam_plugin.go 444: Releasing address using workloadID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.462457 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.462457 env[1110]: 2023-10-02 19:27:05.461 [INFO][3485] k8s.go 589: Teardown processing complete. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.462959 env[1110]: time="2023-10-02T19:27:05.462529188Z" level=info msg="TearDown network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" successfully" Oct 2 19:27:05.462959 env[1110]: time="2023-10-02T19:27:05.462561218Z" level=info msg="StopPodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" returns successfully" Oct 2 19:27:05.463179 env[1110]: time="2023-10-02T19:27:05.463143536Z" level=info msg="RemovePodSandbox for \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" Oct 2 19:27:05.463249 env[1110]: time="2023-10-02T19:27:05.463186828Z" level=info msg="Forcibly stopping sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\"" Oct 2 19:27:05.500667 systemd[1]: run-containerd-runc-k8s.io-a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3-runc.BWni6w.mount: Deactivated successfully. Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.501 [WARNING][3514] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0", GenerateName:"calico-kube-controllers-74b9887bb6-", Namespace:"calico-system", SelfLink:"", UID:"76dde907-d81f-4af1-8608-00e5081994e4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b9887bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"8a8797c0c3b5e486d3ab55e855a0f67444cb914a49caaa15eb6ba472a7b643f4", Pod:"calico-kube-controllers-74b9887bb6-g8t2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899a2c2ae2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.502 [INFO][3514] k8s.go 576: Cleaning up netns ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.502 [INFO][3514] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" iface="eth0" netns="" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.502 [INFO][3514] k8s.go 583: Releasing IP address(es) ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.502 [INFO][3514] utils.go 196: Calico CNI releasing IP address ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.523 [INFO][3529] ipam_plugin.go 416: Releasing address using handleID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.533579 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.533579 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.529 [WARNING][3529] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.529 [INFO][3529] ipam_plugin.go 444: Releasing address using workloadID ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" HandleID="k8s-pod-network.f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Workload="10.0.0.19-k8s-calico--kube--controllers--74b9887bb6--g8t2d-eth0" Oct 2 19:27:05.533579 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.533579 env[1110]: 2023-10-02 19:27:05.531 [INFO][3514] k8s.go 589: Teardown processing complete. ContainerID="f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b" Oct 2 19:27:05.534075 env[1110]: time="2023-10-02T19:27:05.533613139Z" level=info msg="TearDown network for sandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" successfully" Oct 2 19:27:05.536891 env[1110]: time="2023-10-02T19:27:05.536861803Z" level=info msg="RemovePodSandbox \"f5a5503a721bfc043ce77da13ffc23517206f3fafa580cd02d0425363528790b\" returns successfully" Oct 2 19:27:05.537545 env[1110]: time="2023-10-02T19:27:05.537509895Z" level=info msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.573 [WARNING][3563] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d465276a-936e-4514-bd15-fe7cf64b503d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3", Pod:"nginx-deployment-6d5f899847-nztm7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49ab8b9563e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.573 [INFO][3563] k8s.go 576: Cleaning up netns ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.573 [INFO][3563] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" iface="eth0" netns="" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.573 [INFO][3563] k8s.go 583: Releasing IP address(es) ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.573 [INFO][3563] utils.go 196: Calico CNI releasing IP address ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.591 [INFO][3572] ipam_plugin.go 416: Releasing address using handleID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.601976 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.601976 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.598 [WARNING][3572] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.598 [INFO][3572] ipam_plugin.go 444: Releasing address using workloadID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.601976 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.601976 env[1110]: 2023-10-02 19:27:05.600 [INFO][3563] k8s.go 589: Teardown processing complete. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.602456 env[1110]: time="2023-10-02T19:27:05.602018729Z" level=info msg="TearDown network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" successfully" Oct 2 19:27:05.602456 env[1110]: time="2023-10-02T19:27:05.602054016Z" level=info msg="StopPodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" returns successfully" Oct 2 19:27:05.602678 env[1110]: time="2023-10-02T19:27:05.602636905Z" level=info msg="RemovePodSandbox for \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" Oct 2 19:27:05.602750 env[1110]: time="2023-10-02T19:27:05.602682480Z" level=info msg="Forcibly stopping sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\"" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.636 [WARNING][3596] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"d465276a-936e-4514-bd15-fe7cf64b503d", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3", Pod:"nginx-deployment-6d5f899847-nztm7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49ab8b9563e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.636 [INFO][3596] k8s.go 576: Cleaning up netns ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.636 [INFO][3596] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" iface="eth0" netns="" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.636 [INFO][3596] k8s.go 583: Releasing IP address(es) ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.636 [INFO][3596] utils.go 196: Calico CNI releasing IP address ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.654 [INFO][3603] ipam_plugin.go 416: Releasing address using handleID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.665445 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.665445 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.661 [WARNING][3603] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.661 [INFO][3603] ipam_plugin.go 444: Releasing address using workloadID ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" HandleID="k8s-pod-network.3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:05.665445 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.665445 env[1110]: 2023-10-02 19:27:05.664 [INFO][3596] k8s.go 589: Teardown processing complete. ContainerID="3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b" Oct 2 19:27:05.665907 env[1110]: time="2023-10-02T19:27:05.665421057Z" level=info msg="TearDown network for sandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" successfully" Oct 2 19:27:05.668921 env[1110]: time="2023-10-02T19:27:05.668861784Z" level=info msg="RemovePodSandbox \"3d8b28d138921914ef6ca81b09cf3b87e4ccefe81e3a83a7fc46d11552c1887b\" returns successfully" Oct 2 19:27:05.669502 env[1110]: time="2023-10-02T19:27:05.669453079Z" level=info msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.704 [WARNING][3626] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"abf6e2c9-193c-4296-8247-02d6e5da6ae3", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77", Pod:"coredns-5dd5756b68-8c5qr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282aa86e6d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.704 [INFO][3626] k8s.go 576: Cleaning up netns ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.704 [INFO][3626] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" iface="eth0" netns="" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.704 [INFO][3626] k8s.go 583: Releasing IP address(es) ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.704 [INFO][3626] utils.go 196: Calico CNI releasing IP address ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.720 [INFO][3633] ipam_plugin.go 416: Releasing address using handleID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.730849 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.730849 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.726 [WARNING][3633] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.726 [INFO][3633] ipam_plugin.go 444: Releasing address using workloadID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.730849 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.730849 env[1110]: 2023-10-02 19:27:05.729 [INFO][3626] k8s.go 589: Teardown processing complete. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.730849 env[1110]: time="2023-10-02T19:27:05.730783311Z" level=info msg="TearDown network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" successfully" Oct 2 19:27:05.730849 env[1110]: time="2023-10-02T19:27:05.730828025Z" level=info msg="StopPodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" returns successfully" Oct 2 19:27:05.731510 env[1110]: time="2023-10-02T19:27:05.731334250Z" level=info msg="RemovePodSandbox for \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" Oct 2 19:27:05.731510 env[1110]: time="2023-10-02T19:27:05.731369115Z" level=info msg="Forcibly stopping sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\"" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.761 [WARNING][3656] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"abf6e2c9-193c-4296-8247-02d6e5da6ae3", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"52ccd3e32255c0fe35c079b0aa22c1c6dac02caae3ae876a14d7c1854a8acd77", Pod:"coredns-5dd5756b68-8c5qr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282aa86e6d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.762 [INFO][3656] k8s.go 576: Cleaning up netns ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.762 [INFO][3656] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" iface="eth0" netns="" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.762 [INFO][3656] k8s.go 583: Releasing IP address(es) ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.762 [INFO][3656] utils.go 196: Calico CNI releasing IP address ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.777 [INFO][3663] ipam_plugin.go 416: Releasing address using handleID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.786922 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.786922 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.783 [WARNING][3663] ipam_plugin.go 433: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.783 [INFO][3663] ipam_plugin.go 444: Releasing address using workloadID ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" HandleID="k8s-pod-network.7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Workload="10.0.0.19-k8s-coredns--5dd5756b68--8c5qr-eth0" Oct 2 19:27:05.786922 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.786922 env[1110]: 2023-10-02 19:27:05.785 [INFO][3656] k8s.go 589: Teardown processing complete. ContainerID="7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479" Oct 2 19:27:05.787434 env[1110]: time="2023-10-02T19:27:05.786915554Z" level=info msg="TearDown network for sandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" successfully" Oct 2 19:27:05.791988 env[1110]: time="2023-10-02T19:27:05.791956687Z" level=info msg="RemovePodSandbox \"7008cab79eafa03153d225a785f02f9a214c750abc5b38666a63f6fc86cc3479\" returns successfully" Oct 2 19:27:05.792580 env[1110]: time="2023-10-02T19:27:05.792538144Z" level=info msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.823 [WARNING][3685] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0ac60edc-a9a5-4566-a663-7a49486a549a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c", Pod:"coredns-5dd5756b68-kq6xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283fb31178f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.823 [INFO][3685] k8s.go 576: Cleaning up netns ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.823 [INFO][3685] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" iface="eth0" netns="" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.823 [INFO][3685] k8s.go 583: Releasing IP address(es) ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.823 [INFO][3685] utils.go 196: Calico CNI releasing IP address ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.841 [INFO][3692] ipam_plugin.go 416: Releasing address using handleID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.851358 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.851358 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.847 [WARNING][3692] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.847 [INFO][3692] ipam_plugin.go 444: Releasing address using workloadID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.851358 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.851358 env[1110]: 2023-10-02 19:27:05.850 [INFO][3685] k8s.go 589: Teardown processing complete. 
ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.851836 env[1110]: time="2023-10-02T19:27:05.851393690Z" level=info msg="TearDown network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" successfully" Oct 2 19:27:05.851836 env[1110]: time="2023-10-02T19:27:05.851430831Z" level=info msg="StopPodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" returns successfully" Oct 2 19:27:05.852093 env[1110]: time="2023-10-02T19:27:05.852059495Z" level=info msg="RemovePodSandbox for \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" Oct 2 19:27:05.852148 env[1110]: time="2023-10-02T19:27:05.852101043Z" level=info msg="Forcibly stopping sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\"" Oct 2 19:27:05.853243 kubelet[1416]: E1002 19:27:05.853187 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.883 [WARNING][3717] k8s.go 540: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0ac60edc-a9a5-4566-a663-7a49486a549a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2023, time.October, 2, 19, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"24317d0a4c87885596a0e21205fe60ae664335d11bff4250e737053a3529933c", Pod:"coredns-5dd5756b68-kq6xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283fb31178f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.883 [INFO][3717] k8s.go 576: Cleaning up netns ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.884 [INFO][3717] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" iface="eth0" netns="" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.884 [INFO][3717] k8s.go 583: Releasing IP address(es) ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.884 [INFO][3717] utils.go 196: Calico CNI releasing IP address ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.902 [INFO][3725] ipam_plugin.go 416: Releasing address using handleID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.913636 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:05.913636 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.909 [WARNING][3725] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.909 [INFO][3725] ipam_plugin.go 444: Releasing address using workloadID ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" HandleID="k8s-pod-network.6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Workload="10.0.0.19-k8s-coredns--5dd5756b68--kq6xj-eth0" Oct 2 19:27:05.913636 env[1110]: time="2023-10-02T19:27:05Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:05.913636 env[1110]: 2023-10-02 19:27:05.912 [INFO][3717] k8s.go 589: Teardown processing complete. 
ContainerID="6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c" Oct 2 19:27:05.914243 env[1110]: time="2023-10-02T19:27:05.913645550Z" level=info msg="TearDown network for sandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" successfully" Oct 2 19:27:05.916599 env[1110]: time="2023-10-02T19:27:05.916505972Z" level=info msg="RemovePodSandbox \"6eaebf56cf2c9ebc84affdb11fa19dfd3e30ba1444d77502d4cbfd552484719c\" returns successfully" Oct 2 19:27:05.917242 kubelet[1416]: I1002 19:27:05.917214 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:05.931036 kubelet[1416]: I1002 19:27:05.931010 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:05.931129 kubelet[1416]: I1002 19:27:05.931107 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","tigera-operator/tigera-operator-8547bd6cc6-zx7vw","default/nginx-deployment-6d5f899847-nztm7","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-8c5qr","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:05.931166 kubelet[1416]: E1002 19:27:05.931148 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:05.931701 env[1110]: time="2023-10-02T19:27:05.931656775Z" level=info msg="StopContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" with timeout 30 (s)" Oct 2 19:27:05.932090 env[1110]: time="2023-10-02T19:27:05.932051990Z" level=info msg="Stop container \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" with signal terminated" Oct 2 19:27:05.939769 systemd[1]: cri-containerd-9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505.scope: Deactivated successfully. Oct 2 19:27:05.938000 audit: BPF prog-id=100 op=UNLOAD Oct 2 19:27:05.940850 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 2 19:27:05.940918 kernel: audit: type=1334 audit(1696274825.938:900): prog-id=100 op=UNLOAD Oct 2 19:27:05.944000 audit: BPF prog-id=103 op=UNLOAD Oct 2 19:27:05.947808 kernel: audit: type=1334 audit(1696274825.944:901): prog-id=103 op=UNLOAD Oct 2 19:27:05.956547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505-rootfs.mount: Deactivated successfully. 
Oct 2 19:27:06.100609 env[1110]: time="2023-10-02T19:27:06.100547514Z" level=info msg="shim disconnected" id=9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505 Oct 2 19:27:06.100609 env[1110]: time="2023-10-02T19:27:06.100599683Z" level=warning msg="cleaning up after shim disconnected" id=9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505 namespace=k8s.io Oct 2 19:27:06.100609 env[1110]: time="2023-10-02T19:27:06.100608138Z" level=info msg="cleaning up dead shim" Oct 2 19:27:06.107363 env[1110]: time="2023-10-02T19:27:06.107299058Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3752 runtime=io.containerd.runc.v2\n" Oct 2 19:27:06.110029 env[1110]: time="2023-10-02T19:27:06.109990340Z" level=info msg="StopContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" returns successfully" Oct 2 19:27:06.110632 env[1110]: time="2023-10-02T19:27:06.110610119Z" level=info msg="StopPodSandbox for \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\"" Oct 2 19:27:06.110687 env[1110]: time="2023-10-02T19:27:06.110661825Z" level=info msg="Container to stop \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:27:06.112205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad-shm.mount: Deactivated successfully. Oct 2 19:27:06.116669 systemd[1]: cri-containerd-388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad.scope: Deactivated successfully. Oct 2 19:27:06.115000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:27:06.118805 kernel: audit: type=1334 audit(1696274826.115:902): prog-id=84 op=UNLOAD Oct 2 19:27:06.119000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:27:06.121843 kernel: audit: type=1334 audit(1696274826.119:903): prog-id=87 op=UNLOAD Oct 2 19:27:06.137444 env[1110]: time="2023-10-02T19:27:06.137390450Z" level=info msg="shim disconnected" id=388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad Oct 2 19:27:06.137444 env[1110]: time="2023-10-02T19:27:06.137435504Z" level=warning msg="cleaning up after shim disconnected" id=388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad namespace=k8s.io Oct 2 19:27:06.137444 env[1110]: time="2023-10-02T19:27:06.137445072Z" level=info msg="cleaning up dead shim" Oct 2 19:27:06.144403 env[1110]: time="2023-10-02T19:27:06.144342261Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3783 runtime=io.containerd.runc.v2\n" Oct 2 19:27:06.144696 env[1110]: time="2023-10-02T19:27:06.144667043Z" level=info msg="TearDown network for sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" successfully" Oct 2 19:27:06.144696 env[1110]: time="2023-10-02T19:27:06.144692592Z" level=info msg="StopPodSandbox for \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" returns successfully" Oct 2 19:27:06.149159 kubelet[1416]: I1002 19:27:06.149131 1416 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-8547bd6cc6-zx7vw" Oct 2 19:27:06.149159 kubelet[1416]: I1002 19:27:06.149157 1416 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-8547bd6cc6-zx7vw"] Oct 2 19:27:06.161581 kubelet[1416]: 
I1002 19:27:06.161559 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bsfsq" nodeCondition=["DiskPressure"] Oct 2 19:27:06.174829 kubelet[1416]: I1002 19:27:06.174164 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dxxbn" nodeCondition=["DiskPressure"] Oct 2 19:27:06.188171 kubelet[1416]: I1002 19:27:06.188151 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hfrst" nodeCondition=["DiskPressure"] Oct 2 19:27:06.200347 kubelet[1416]: I1002 19:27:06.200330 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-59csp" nodeCondition=["DiskPressure"] Oct 2 19:27:06.211399 kubelet[1416]: I1002 19:27:06.211385 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rw4kk" nodeCondition=["DiskPressure"] Oct 2 19:27:06.223547 kubelet[1416]: I1002 19:27:06.223526 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vjtz9" nodeCondition=["DiskPressure"] Oct 2 19:27:06.232877 kubelet[1416]: I1002 19:27:06.232859 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcqgn\" (UniqueName: \"kubernetes.io/projected/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-kube-api-access-bcqgn\") pod \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\" (UID: \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\") " Oct 2 19:27:06.232950 kubelet[1416]: I1002 19:27:06.232889 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-var-lib-calico\") pod \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\" (UID: \"cdcab79d-5ddb-4900-9b9c-6f6ae31bf773\") " Oct 2 19:27:06.232950 kubelet[1416]: I1002 19:27:06.232944 1416 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "cdcab79d-5ddb-4900-9b9c-6f6ae31bf773" (UID: "cdcab79d-5ddb-4900-9b9c-6f6ae31bf773"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:27:06.235651 kubelet[1416]: I1002 19:27:06.235626 1416 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-kube-api-access-bcqgn" (OuterVolumeSpecName: "kube-api-access-bcqgn") pod "cdcab79d-5ddb-4900-9b9c-6f6ae31bf773" (UID: "cdcab79d-5ddb-4900-9b9c-6f6ae31bf773"). InnerVolumeSpecName "kube-api-access-bcqgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:27:06.236078 kubelet[1416]: I1002 19:27:06.236054 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-468tv" nodeCondition=["DiskPressure"] Oct 2 19:27:06.252722 kubelet[1416]: I1002 19:27:06.252701 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cwgll" nodeCondition=["DiskPressure"] Oct 2 19:27:06.313613 kubelet[1416]: I1002 19:27:06.313573 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nn6lp" nodeCondition=["DiskPressure"] Oct 2 19:27:06.321807 kubelet[1416]: I1002 19:27:06.321769 1416 scope.go:117] "RemoveContainer" containerID="9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505" Oct 2 19:27:06.322916 env[1110]: time="2023-10-02T19:27:06.322874216Z" level=info msg="RemoveContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\"" Oct 2 19:27:06.325389 systemd[1]: Removed slice kubepods-besteffort-podcdcab79d_5ddb_4900_9b9c_6f6ae31bf773.slice. Oct 2 19:27:06.326390 env[1110]: time="2023-10-02T19:27:06.326359575Z" level=info msg="RemoveContainer for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" returns successfully" Oct 2 19:27:06.326585 kubelet[1416]: I1002 19:27:06.326565 1416 scope.go:117] "RemoveContainer" containerID="9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505" Oct 2 19:27:06.326853 env[1110]: time="2023-10-02T19:27:06.326748868Z" level=error msg="ContainerStatus for \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\": not found" Oct 2 19:27:06.327008 kubelet[1416]: E1002 19:27:06.326983 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\": not found" containerID="9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505" Oct 2 19:27:06.327091 kubelet[1416]: I1002 19:27:06.327035 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505"} err="failed to get container status \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\": rpc error: code = NotFound desc = an error occurred when try to find container \"9590730389e6f7d37e571800e92ca70043f1318640807e9d222a766999597505\": not found" Oct 2 19:27:06.333812 kubelet[1416]: I1002 19:27:06.333779 1416 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bcqgn\" (UniqueName: \"kubernetes.io/projected/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-kube-api-access-bcqgn\") on node \"10.0.0.19\" DevicePath \"\"" Oct 2 19:27:06.333812 kubelet[1416]: I1002 19:27:06.333811 1416 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cdcab79d-5ddb-4900-9b9c-6f6ae31bf773-var-lib-calico\") on node \"10.0.0.19\" DevicePath \"\"" Oct 2 19:27:06.463648 kubelet[1416]: I1002 19:27:06.463539 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sv6pp" nodeCondition=["DiskPressure"] Oct 2 19:27:06.495182 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad-rootfs.mount: Deactivated successfully. Oct 2 19:27:06.495282 systemd[1]: var-lib-kubelet-pods-cdcab79d\x2d5ddb\x2d4900\x2d9b9c\x2d6f6ae31bf773-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbcqgn.mount: Deactivated successfully. Oct 2 19:27:06.613893 kubelet[1416]: I1002 19:27:06.613848 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9wfrc" nodeCondition=["DiskPressure"] Oct 2 19:27:06.765245 kubelet[1416]: I1002 19:27:06.765112 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k7hvr" nodeCondition=["DiskPressure"] Oct 2 19:27:06.853965 kubelet[1416]: E1002 19:27:06.853901 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:06.915656 kubelet[1416]: I1002 19:27:06.915607 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z2b5p" nodeCondition=["DiskPressure"] Oct 2 19:27:07.012625 kubelet[1416]: I1002 19:27:07.012581 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bwbvt" nodeCondition=["DiskPressure"] Oct 2 19:27:07.149257 kubelet[1416]: I1002 19:27:07.149215 1416 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-8547bd6cc6-zx7vw"] Oct 2 19:27:07.158858 kubelet[1416]: I1002 19:27:07.158821 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:07.158858 kubelet[1416]: I1002 19:27:07.158854 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:07.159914 env[1110]: time="2023-10-02T19:27:07.159866138Z" level=info msg="StopPodSandbox for \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\"" Oct 2 19:27:07.160218 env[1110]: time="2023-10-02T19:27:07.159943613Z" level=info msg="TearDown network for sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" successfully" Oct 2 19:27:07.160218 env[1110]: time="2023-10-02T19:27:07.159983148Z" level=info msg="StopPodSandbox for \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" returns successfully" Oct 2 19:27:07.160550 env[1110]: time="2023-10-02T19:27:07.160521252Z" level=info msg="RemovePodSandbox for \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\"" Oct 2 19:27:07.160613 env[1110]: time="2023-10-02T19:27:07.160556449Z" level=info msg="Forcibly stopping sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\"" Oct 2 19:27:07.160644 env[1110]: time="2023-10-02T19:27:07.160635146Z" level=info msg="TearDown network for sandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" successfully" Oct 2 19:27:07.163969 kubelet[1416]: I1002 19:27:07.163947 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8klr9" nodeCondition=["DiskPressure"] Oct 2 19:27:07.165157 env[1110]: time="2023-10-02T19:27:07.165125606Z" level=info msg="RemovePodSandbox \"388d221a53106c4ae130bfa859ffbadcd3535cfc04022dbb019d874d869e0cad\" returns successfully" Oct 2 19:27:07.165675 kubelet[1416]: I1002 19:27:07.165654 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:07.176706 kubelet[1416]: I1002 
19:27:07.176677 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:07.176856 kubelet[1416]: I1002 19:27:07.176762 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","default/nginx-deployment-6d5f899847-nztm7","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-kq6xj","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:07.176856 kubelet[1416]: E1002 19:27:07.176825 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:07.177241 env[1110]: time="2023-10-02T19:27:07.177156343Z" level=info msg="StopContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" with timeout 30 (s)" Oct 2 19:27:07.177478 env[1110]: time="2023-10-02T19:27:07.177453483Z" level=info msg="Stop container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" with signal quit" Oct 2 19:27:07.195172 systemd[1]: cri-containerd-e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb.scope: Deactivated successfully. Oct 2 19:27:07.193000 audit: BPF prog-id=128 op=UNLOAD Oct 2 19:27:07.196806 kernel: audit: type=1334 audit(1696274827.193:904): prog-id=128 op=UNLOAD Oct 2 19:27:07.198000 audit: BPF prog-id=131 op=UNLOAD Oct 2 19:27:07.201808 kernel: audit: type=1334 audit(1696274827.198:905): prog-id=131 op=UNLOAD Oct 2 19:27:07.210903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb-rootfs.mount: Deactivated successfully. Oct 2 19:27:07.218420 env[1110]: time="2023-10-02T19:27:07.218362668Z" level=info msg="shim disconnected" id=e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb Oct 2 19:27:07.218420 env[1110]: time="2023-10-02T19:27:07.218415767Z" level=warning msg="cleaning up after shim disconnected" id=e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb namespace=k8s.io Oct 2 19:27:07.218576 env[1110]: time="2023-10-02T19:27:07.218429263Z" level=info msg="cleaning up dead shim" Oct 2 19:27:07.224817 env[1110]: time="2023-10-02T19:27:07.224794656Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3856 runtime=io.containerd.runc.v2\n" Oct 2 19:27:07.227544 env[1110]: time="2023-10-02T19:27:07.227516503Z" level=info msg="StopContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" returns successfully" Oct 2 19:27:07.228060 env[1110]: time="2023-10-02T19:27:07.228017758Z" level=info msg="StopPodSandbox for \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\"" Oct 2 19:27:07.228060 env[1110]: time="2023-10-02T19:27:07.228067762Z" level=info msg="Container to stop \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:27:07.230824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3-shm.mount: Deactivated successfully. Oct 2 19:27:07.234113 systemd[1]: cri-containerd-27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3.scope: Deactivated successfully. 
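The stop sequences recorded here ("StopContainer for ... with timeout 30 (s)", "Stop container ... with signal terminated" or "quit", then the scope deactivating and the shim disconnecting) follow the usual graceful-stop pattern: deliver the stop signal, wait out the grace period, and only then force-kill. A minimal sketch of that pattern, assuming a plain child process rather than containerd's task API; the helper name and the demo command are illustrative only.

// stopWithTimeout: signal, wait up to the grace period, then SIGKILL.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, sig syscall.Signal, grace time.Duration) {
	_ = cmd.Process.Signal(sig) // e.g. SIGTERM ("terminated") or SIGQUIT ("quit")

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		fmt.Println("grace period expired, sending SIGKILL")
		_ = cmd.Process.Kill()
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container's init process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithTimeout(cmd, syscall.SIGTERM, 30*time.Second)
}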
Oct 2 19:27:07.232000 audit: BPF prog-id=104 op=UNLOAD Oct 2 19:27:07.235809 kernel: audit: type=1334 audit(1696274827.232:906): prog-id=104 op=UNLOAD Oct 2 19:27:07.239000 audit: BPF prog-id=107 op=UNLOAD Oct 2 19:27:07.241812 kernel: audit: type=1334 audit(1696274827.239:907): prog-id=107 op=UNLOAD Oct 2 19:27:07.252470 env[1110]: time="2023-10-02T19:27:07.252406347Z" level=info msg="shim disconnected" id=27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3 Oct 2 19:27:07.252470 env[1110]: time="2023-10-02T19:27:07.252458325Z" level=warning msg="cleaning up after shim disconnected" id=27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3 namespace=k8s.io Oct 2 19:27:07.252470 env[1110]: time="2023-10-02T19:27:07.252466831Z" level=info msg="cleaning up dead shim" Oct 2 19:27:07.258816 env[1110]: time="2023-10-02T19:27:07.258759056Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3886 runtime=io.containerd.runc.v2\n" Oct 2 19:27:07.295360 systemd-networkd[1011]: cali49ab8b9563e: Link DOWN Oct 2 19:27:07.295370 systemd-networkd[1011]: cali49ab8b9563e: Lost carrier Oct 2 19:27:07.314479 kubelet[1416]: I1002 19:27:07.314431 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mrbkp" nodeCondition=["DiskPressure"] Oct 2 19:27:07.326903 kubelet[1416]: I1002 19:27:07.326869 1416 scope.go:117] "RemoveContainer" containerID="e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb" Oct 2 19:27:07.327815 env[1110]: time="2023-10-02T19:27:07.327761208Z" level=info msg="RemoveContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\"" Oct 2 19:27:07.330703 env[1110]: time="2023-10-02T19:27:07.330679086Z" level=info msg="RemoveContainer for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" returns successfully" Oct 2 19:27:07.330852 kubelet[1416]: I1002 19:27:07.330830 1416 scope.go:117] "RemoveContainer" containerID="e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb" Oct 2 19:27:07.331050 env[1110]: time="2023-10-02T19:27:07.330993429Z" level=error msg="ContainerStatus for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": not found" Oct 2 19:27:07.331172 kubelet[1416]: E1002 19:27:07.331152 1416 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": not found" containerID="e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb" Oct 2 19:27:07.331229 kubelet[1416]: I1002 19:27:07.331193 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb"} err="failed to get container status \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": not found" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.294 [INFO][3914] k8s.go 576: Cleaning up netns ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:07.369452 env[1110]: 
2023-10-02 19:27:07.294 [INFO][3914] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" iface="eth0" netns="/var/run/netns/cni-b745eb64-2db9-e1c5-5f29-4afd0bfa4392" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.294 [INFO][3914] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" iface="eth0" netns="/var/run/netns/cni-b745eb64-2db9-e1c5-5f29-4afd0bfa4392" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.313 [INFO][3914] dataplane_linux.go 569: Deleted device in netns. ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" after=19.384302ms iface="eth0" netns="/var/run/netns/cni-b745eb64-2db9-e1c5-5f29-4afd0bfa4392" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.314 [INFO][3914] k8s.go 583: Releasing IP address(es) ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.314 [INFO][3914] utils.go 196: Calico CNI releasing IP address ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.335 [INFO][3924] ipam_plugin.go 416: Releasing address using handleID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:07.369452 env[1110]: time="2023-10-02T19:27:07Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:07.369452 env[1110]: time="2023-10-02T19:27:07Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.364 [INFO][3924] ipam_plugin.go 435: Released address using handleID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.364 [INFO][3924] ipam_plugin.go 444: Releasing address using workloadID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:07.369452 env[1110]: time="2023-10-02T19:27:07Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:07.369452 env[1110]: 2023-10-02 19:27:07.368 [INFO][3914] k8s.go 589: Teardown processing complete. 
ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:07.370077 env[1110]: time="2023-10-02T19:27:07.369616707Z" level=info msg="TearDown network for sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" successfully" Oct 2 19:27:07.370077 env[1110]: time="2023-10-02T19:27:07.369648526Z" level=info msg="StopPodSandbox for \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" returns successfully" Oct 2 19:27:07.373400 kubelet[1416]: I1002 19:27:07.373365 1416 eviction_manager.go:592] "Eviction manager: pod is evicted successfully" pod="default/nginx-deployment-6d5f899847-nztm7" Oct 2 19:27:07.373400 kubelet[1416]: I1002 19:27:07.373396 1416 eviction_manager.go:201] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["default/nginx-deployment-6d5f899847-nztm7"] Oct 2 19:27:07.382134 kubelet[1416]: I1002 19:27:07.382116 1416 scope.go:117] "RemoveContainer" containerID="e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb" Oct 2 19:27:07.382360 env[1110]: time="2023-10-02T19:27:07.382290424Z" level=error msg="ContainerStatus for \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": not found" Oct 2 19:27:07.382532 kubelet[1416]: I1002 19:27:07.382514 1416 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb"} err="failed to get container status \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7f850958bcfc2329e851ebacbef894a6ef8dc7fb19011c96cc80fa7f8cb03cb\": not found" Oct 2 19:27:07.393000 audit[3938]: NETFILTER_CFG table=filter:81 family=2 entries=38 op=nft_register_rule pid=3938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:27:07.393000 audit[3938]: SYSCALL arch=c000003e syscall=46 success=yes exit=4464 a0=3 a1=7ffd6682f570 a2=0 a3=7ffd6682f55c items=0 ppid=2373 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:07.400155 kernel: audit: type=1325 audit(1696274827.393:908): table=filter:81 family=2 entries=38 op=nft_register_rule pid=3938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:27:07.400364 kernel: audit: type=1300 audit(1696274827.393:908): arch=c000003e syscall=46 success=yes exit=4464 a0=3 a1=7ffd6682f570 a2=0 a3=7ffd6682f55c items=0 ppid=2373 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:07.393000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:27:07.394000 audit[3938]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_unregister_chain pid=3938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 2 19:27:07.394000 audit[3938]: SYSCALL arch=c000003e syscall=46 success=yes exit=848 a0=3 a1=7ffd6682f570 a2=0 a3=55980e553000 items=0 ppid=2373 pid=3938 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:07.394000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 2 19:27:07.440611 kubelet[1416]: I1002 19:27:07.440574 1416 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dplbr\" (UniqueName: \"kubernetes.io/projected/d465276a-936e-4514-bd15-fe7cf64b503d-kube-api-access-dplbr\") pod \"d465276a-936e-4514-bd15-fe7cf64b503d\" (UID: \"d465276a-936e-4514-bd15-fe7cf64b503d\") " Oct 2 19:27:07.442915 kubelet[1416]: I1002 19:27:07.442868 1416 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d465276a-936e-4514-bd15-fe7cf64b503d-kube-api-access-dplbr" (OuterVolumeSpecName: "kube-api-access-dplbr") pod "d465276a-936e-4514-bd15-fe7cf64b503d" (UID: "d465276a-936e-4514-bd15-fe7cf64b503d"). InnerVolumeSpecName "kube-api-access-dplbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:27:07.464663 kubelet[1416]: I1002 19:27:07.464619 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nv6jb" nodeCondition=["DiskPressure"] Oct 2 19:27:07.495153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3-rootfs.mount: Deactivated successfully. Oct 2 19:27:07.495256 systemd[1]: run-netns-cni\x2db745eb64\x2d2db9\x2de1c5\x2d5f29\x2d4afd0bfa4392.mount: Deactivated successfully. Oct 2 19:27:07.495313 systemd[1]: var-lib-kubelet-pods-d465276a\x2d936e\x2d4514\x2dbd15\x2dfe7cf64b503d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddplbr.mount: Deactivated successfully. Oct 2 19:27:07.540974 kubelet[1416]: I1002 19:27:07.540924 1416 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dplbr\" (UniqueName: \"kubernetes.io/projected/d465276a-936e-4514-bd15-fe7cf64b503d-kube-api-access-dplbr\") on node \"10.0.0.19\" DevicePath \"\"" Oct 2 19:27:07.714583 kubelet[1416]: I1002 19:27:07.714226 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qm2rg" nodeCondition=["DiskPressure"] Oct 2 19:27:07.855064 kubelet[1416]: E1002 19:27:07.855007 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:07.963034 kubelet[1416]: I1002 19:27:07.962978 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vjrtb" nodeCondition=["DiskPressure"] Oct 2 19:27:08.119214 kubelet[1416]: I1002 19:27:08.119173 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wmf65" nodeCondition=["DiskPressure"] Oct 2 19:27:08.263474 kubelet[1416]: I1002 19:27:08.263423 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pmgvq" nodeCondition=["DiskPressure"] Oct 2 19:27:08.333918 systemd[1]: Removed slice kubepods-besteffort-podd465276a_936e_4514_bd15_fe7cf64b503d.slice. 
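The kubelet entries around this point record the eviction manager reacting to DiskPressure: new tigera-operator replicas are refused at admission ("Failed to admit pod to node"), running pods are ranked for eviction, critical pods are skipped ("cannot evict a critical pod"), and once only critical pods remain it reports it is unable to evict anything. A rough sketch of that decision loop follows; the ranking (largest ephemeral-storage consumer first) is a simplification of kubelet's real ordering, and the sizes are made up for illustration.

// evictOne: rank candidates, skip critical pods, pick the first evictable one.
package main

import (
	"fmt"
	"sort"
)

type podInfo struct {
	name             string
	critical         bool
	ephemeralStorage int64 // bytes used (made-up values below)
}

func evictOne(pods []podInfo) (string, bool) {
	// Simplified ranking: heaviest ephemeral-storage consumers first.
	sort.Slice(pods, func(i, j int) bool {
		return pods[i].ephemeralStorage > pods[j].ephemeralStorage
	})
	for _, p := range pods {
		if p.critical {
			fmt.Println("cannot evict a critical pod:", p.name)
			continue
		}
		return p.name, true
	}
	return "", false
}

func main() {
	pods := []podInfo{
		{"calico-system/csi-node-driver-75kzt", true, 900 << 20},
		{"default/nginx-deployment-6d5f899847-nztm7", false, 500 << 20},
		{"kube-system/kube-proxy-x6vv7", true, 100 << 20},
	}
	if victim, ok := evictOne(pods); ok {
		fmt.Println("evicting", victim)
	} else {
		fmt.Println("unable to evict any pods from the node")
	}
}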
Oct 2 19:27:08.374613 kubelet[1416]: I1002 19:27:08.374461 1416 eviction_manager.go:423] "Eviction manager: pods successfully cleaned up" pods=["default/nginx-deployment-6d5f899847-nztm7"] Oct 2 19:27:08.383960 kubelet[1416]: I1002 19:27:08.383930 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:08.384067 kubelet[1416]: I1002 19:27:08.383975 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:08.385450 env[1110]: time="2023-10-02T19:27:08.385421076Z" level=info msg="StopPodSandbox for \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\"" Oct 2 19:27:08.413154 kubelet[1416]: I1002 19:27:08.413105 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mdsgl" nodeCondition=["DiskPressure"] Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.422 [INFO][3955] k8s.go 576: Cleaning up netns ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.422 [INFO][3955] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" iface="eth0" netns="" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.422 [INFO][3955] k8s.go 583: Releasing IP address(es) ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.422 [INFO][3955] utils.go 196: Calico CNI releasing IP address ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.439 [INFO][3965] ipam_plugin.go 416: Releasing address using handleID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.448810 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:08.448810 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.445 [WARNING][3965] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.445 [INFO][3965] ipam_plugin.go 444: Releasing address using workloadID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.448810 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:08.448810 env[1110]: 2023-10-02 19:27:08.447 [INFO][3955] k8s.go 589: Teardown processing complete. 
ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.449296 env[1110]: time="2023-10-02T19:27:08.448849032Z" level=info msg="TearDown network for sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" successfully" Oct 2 19:27:08.449296 env[1110]: time="2023-10-02T19:27:08.448890131Z" level=info msg="StopPodSandbox for \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" returns successfully" Oct 2 19:27:08.449504 env[1110]: time="2023-10-02T19:27:08.449469721Z" level=info msg="RemovePodSandbox for \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\"" Oct 2 19:27:08.449573 env[1110]: time="2023-10-02T19:27:08.449514767Z" level=info msg="Forcibly stopping sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\"" Oct 2 19:27:08.514560 kubelet[1416]: I1002 19:27:08.514503 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l5mlf" nodeCondition=["DiskPressure"] Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.483 [INFO][3987] k8s.go 576: Cleaning up netns ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.483 [INFO][3987] dataplane_linux.go 520: CleanUpNamespace called with no netns name, ignoring. ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" iface="eth0" netns="" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.484 [INFO][3987] k8s.go 583: Releasing IP address(es) ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.484 [INFO][3987] utils.go 196: Calico CNI releasing IP address ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.502 [INFO][3994] ipam_plugin.go 416: Releasing address using handleID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.514734 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="About to acquire host-wide IPAM lock." source="ipam_plugin.go:357" Oct 2 19:27:08.514734 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="Acquired host-wide IPAM lock." source="ipam_plugin.go:372" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.508 [WARNING][3994] ipam_plugin.go 433: Asked to release address but it doesn't exist. Ignoring ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.508 [INFO][3994] ipam_plugin.go 444: Releasing address using workloadID ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" HandleID="k8s-pod-network.27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Workload="10.0.0.19-k8s-nginx--deployment--6d5f899847--nztm7-eth0" Oct 2 19:27:08.514734 env[1110]: time="2023-10-02T19:27:08Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:378" Oct 2 19:27:08.514734 env[1110]: 2023-10-02 19:27:08.512 [INFO][3987] k8s.go 589: Teardown processing complete. 
ContainerID="27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3" Oct 2 19:27:08.515015 env[1110]: time="2023-10-02T19:27:08.514712848Z" level=info msg="TearDown network for sandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" successfully" Oct 2 19:27:08.517603 env[1110]: time="2023-10-02T19:27:08.517567244Z" level=info msg="RemovePodSandbox \"27f9413f33446aea4a87e00d4453e0f3066c9cb6461a3aa6de5ca24209c300f3\" returns successfully" Oct 2 19:27:08.518039 kubelet[1416]: I1002 19:27:08.518011 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:08.530349 kubelet[1416]: I1002 19:27:08.530293 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:08.530529 kubelet[1416]: I1002 19:27:08.530370 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-kq6xj","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530397 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530411 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530422 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530430 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530439 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:08.530529 kubelet[1416]: E1002 19:27:08.530447 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:08.530529 kubelet[1416]: I1002 19:27:08.530457 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:08.664414 kubelet[1416]: I1002 19:27:08.664258 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-98xfh" nodeCondition=["DiskPressure"] Oct 2 19:27:08.763264 kubelet[1416]: I1002 19:27:08.763227 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rh8wm" nodeCondition=["DiskPressure"] Oct 2 19:27:08.855292 kubelet[1416]: E1002 19:27:08.855227 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:08.863537 kubelet[1416]: I1002 19:27:08.863500 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6mc6h" nodeCondition=["DiskPressure"] Oct 2 19:27:08.964804 kubelet[1416]: I1002 19:27:08.964637 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rrft5" nodeCondition=["DiskPressure"] Oct 2 19:27:09.065445 kubelet[1416]: I1002 19:27:09.065386 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vg244" 
nodeCondition=["DiskPressure"] Oct 2 19:27:09.163895 kubelet[1416]: I1002 19:27:09.163851 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-69mqj" nodeCondition=["DiskPressure"] Oct 2 19:27:09.263392 kubelet[1416]: I1002 19:27:09.263262 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xvkgb" nodeCondition=["DiskPressure"] Oct 2 19:27:09.365612 kubelet[1416]: I1002 19:27:09.365576 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g6hcl" nodeCondition=["DiskPressure"] Oct 2 19:27:09.464292 kubelet[1416]: I1002 19:27:09.464252 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mxdf4" nodeCondition=["DiskPressure"] Oct 2 19:27:09.566616 kubelet[1416]: I1002 19:27:09.566496 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xwv72" nodeCondition=["DiskPressure"] Oct 2 19:27:09.663303 kubelet[1416]: I1002 19:27:09.663265 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9nqg2" nodeCondition=["DiskPressure"] Oct 2 19:27:09.764165 kubelet[1416]: I1002 19:27:09.764107 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r6dh8" nodeCondition=["DiskPressure"] Oct 2 19:27:09.855902 kubelet[1416]: E1002 19:27:09.855855 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:09.864183 kubelet[1416]: I1002 19:27:09.864159 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-czzgm" nodeCondition=["DiskPressure"] Oct 2 19:27:09.964043 kubelet[1416]: I1002 19:27:09.963966 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xw92q" nodeCondition=["DiskPressure"] Oct 2 19:27:10.063810 kubelet[1416]: I1002 19:27:10.063753 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fgfdz" nodeCondition=["DiskPressure"] Oct 2 19:27:10.165043 kubelet[1416]: I1002 19:27:10.164691 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fw2cr" nodeCondition=["DiskPressure"] Oct 2 19:27:10.215630 kubelet[1416]: I1002 19:27:10.215610 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zjdxv" nodeCondition=["DiskPressure"] Oct 2 19:27:10.314294 kubelet[1416]: I1002 19:27:10.314254 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f2zkx" nodeCondition=["DiskPressure"] Oct 2 19:27:10.413813 kubelet[1416]: I1002 19:27:10.413770 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vkclr" nodeCondition=["DiskPressure"] Oct 2 19:27:10.514179 kubelet[1416]: I1002 19:27:10.513874 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zm2cz" nodeCondition=["DiskPressure"] Oct 2 19:27:10.614263 kubelet[1416]: I1002 19:27:10.614230 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bt24m" nodeCondition=["DiskPressure"] Oct 2 19:27:10.714815 kubelet[1416]: I1002 19:27:10.714760 1416 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ldx4t" nodeCondition=["DiskPressure"] Oct 2 19:27:10.814204 kubelet[1416]: I1002 19:27:10.813914 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5g2bb" nodeCondition=["DiskPressure"] Oct 2 19:27:10.856273 kubelet[1416]: E1002 19:27:10.856226 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:10.915632 kubelet[1416]: I1002 19:27:10.915593 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jc6vs" nodeCondition=["DiskPressure"] Oct 2 19:27:11.014459 kubelet[1416]: I1002 19:27:11.014417 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2jrkn" nodeCondition=["DiskPressure"] Oct 2 19:27:11.115084 kubelet[1416]: I1002 19:27:11.115047 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-85bh5" nodeCondition=["DiskPressure"] Oct 2 19:27:11.214319 kubelet[1416]: I1002 19:27:11.214274 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9k6k6" nodeCondition=["DiskPressure"] Oct 2 19:27:11.313239 kubelet[1416]: I1002 19:27:11.313185 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bcs22" nodeCondition=["DiskPressure"] Oct 2 19:27:11.414909 kubelet[1416]: I1002 19:27:11.414766 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gq8rw" nodeCondition=["DiskPressure"] Oct 2 19:27:11.514225 kubelet[1416]: I1002 19:27:11.514172 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vfw4q" nodeCondition=["DiskPressure"] Oct 2 19:27:11.615341 kubelet[1416]: I1002 19:27:11.615285 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pgh6r" nodeCondition=["DiskPressure"] Oct 2 19:27:11.663965 kubelet[1416]: I1002 19:27:11.663917 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2dsgc" nodeCondition=["DiskPressure"] Oct 2 19:27:11.763530 kubelet[1416]: I1002 19:27:11.763384 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-79lt4" nodeCondition=["DiskPressure"] Oct 2 19:27:11.857034 kubelet[1416]: E1002 19:27:11.856978 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:11.864280 kubelet[1416]: I1002 19:27:11.864243 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b4f5z" nodeCondition=["DiskPressure"] Oct 2 19:27:11.963691 kubelet[1416]: I1002 19:27:11.963649 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cshdm" nodeCondition=["DiskPressure"] Oct 2 19:27:12.065091 kubelet[1416]: I1002 19:27:12.064957 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5qh59" nodeCondition=["DiskPressure"] Oct 2 19:27:12.114985 kubelet[1416]: I1002 19:27:12.114961 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x49j4" 
nodeCondition=["DiskPressure"] Oct 2 19:27:12.215372 kubelet[1416]: I1002 19:27:12.215316 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kbbbs" nodeCondition=["DiskPressure"] Oct 2 19:27:12.414628 kubelet[1416]: I1002 19:27:12.414596 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qdfg6" nodeCondition=["DiskPressure"] Oct 2 19:27:12.516322 kubelet[1416]: I1002 19:27:12.516264 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m7gmm" nodeCondition=["DiskPressure"] Oct 2 19:27:12.613771 kubelet[1416]: I1002 19:27:12.613719 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d4x4t" nodeCondition=["DiskPressure"] Oct 2 19:27:12.715048 kubelet[1416]: I1002 19:27:12.714919 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kt4k6" nodeCondition=["DiskPressure"] Oct 2 19:27:12.763120 kubelet[1416]: I1002 19:27:12.763063 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-52wfx" nodeCondition=["DiskPressure"] Oct 2 19:27:12.857811 kubelet[1416]: E1002 19:27:12.857734 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:12.864718 kubelet[1416]: I1002 19:27:12.864688 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6n829" nodeCondition=["DiskPressure"] Oct 2 19:27:12.966229 kubelet[1416]: I1002 19:27:12.965981 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kkj7w" nodeCondition=["DiskPressure"] Oct 2 19:27:13.014294 kubelet[1416]: I1002 19:27:13.014245 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rlxdl" nodeCondition=["DiskPressure"] Oct 2 19:27:13.116094 kubelet[1416]: I1002 19:27:13.116045 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w8cdl" nodeCondition=["DiskPressure"] Oct 2 19:27:13.215332 kubelet[1416]: I1002 19:27:13.215273 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8cvbx" nodeCondition=["DiskPressure"] Oct 2 19:27:13.314092 kubelet[1416]: I1002 19:27:13.313804 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rcm95" nodeCondition=["DiskPressure"] Oct 2 19:27:13.414766 kubelet[1416]: I1002 19:27:13.414717 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-788cl" nodeCondition=["DiskPressure"] Oct 2 19:27:13.463774 kubelet[1416]: I1002 19:27:13.463724 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tvt5n" nodeCondition=["DiskPressure"] Oct 2 19:27:13.564755 kubelet[1416]: I1002 19:27:13.564611 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vxk42" nodeCondition=["DiskPressure"] Oct 2 19:27:13.665556 kubelet[1416]: I1002 19:27:13.665507 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vvpzj" nodeCondition=["DiskPressure"] Oct 2 19:27:13.765175 kubelet[1416]: I1002 19:27:13.765133 1416 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5fcbq" nodeCondition=["DiskPressure"] Oct 2 19:27:13.858013 kubelet[1416]: E1002 19:27:13.857957 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:13.965067 kubelet[1416]: I1002 19:27:13.965009 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l8hvk" nodeCondition=["DiskPressure"] Oct 2 19:27:14.067439 kubelet[1416]: I1002 19:27:14.067373 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r7rjw" nodeCondition=["DiskPressure"] Oct 2 19:27:14.164940 kubelet[1416]: I1002 19:27:14.164800 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zb5xp" nodeCondition=["DiskPressure"] Oct 2 19:27:14.266577 kubelet[1416]: I1002 19:27:14.266513 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bl9lw" nodeCondition=["DiskPressure"] Oct 2 19:27:14.364563 kubelet[1416]: I1002 19:27:14.364516 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qg9hf" nodeCondition=["DiskPressure"] Oct 2 19:27:14.464667 kubelet[1416]: I1002 19:27:14.464526 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hm556" nodeCondition=["DiskPressure"] Oct 2 19:27:14.567165 kubelet[1416]: I1002 19:27:14.567115 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rkwjq" nodeCondition=["DiskPressure"] Oct 2 19:27:14.766229 kubelet[1416]: I1002 19:27:14.766085 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-47clq" nodeCondition=["DiskPressure"] Oct 2 19:27:14.819727 kubelet[1416]: E1002 19:27:14.819682 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.858426 kubelet[1416]: E1002 19:27:14.858374 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.864452 kubelet[1416]: I1002 19:27:14.864421 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-92l4x" nodeCondition=["DiskPressure"] Oct 2 19:27:14.914978 env[1110]: time="2023-10-02T19:27:14.914932318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:27:14.964607 kubelet[1416]: I1002 19:27:14.964557 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fqhj6" nodeCondition=["DiskPressure"] Oct 2 19:27:15.064275 kubelet[1416]: I1002 19:27:15.064148 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-c2lk9" nodeCondition=["DiskPressure"] Oct 2 19:27:15.165122 kubelet[1416]: I1002 19:27:15.165059 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hdnps" nodeCondition=["DiskPressure"] Oct 2 19:27:15.189312 env[1110]: time="2023-10-02T19:27:15.189250655Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:15.190307 env[1110]: 
time="2023-10-02T19:27:15.190279840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:15.190510 kubelet[1416]: E1002 19:27:15.190489 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:15.190633 kubelet[1416]: E1002 19:27:15.190609 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:15.190724 kubelet[1416]: E1002 19:27:15.190705 1416 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:15.191313 env[1110]: time="2023-10-02T19:27:15.191293426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:27:15.263855 kubelet[1416]: I1002 19:27:15.263810 1416 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-szwfc" nodeCondition=["DiskPressure"] Oct 2 19:27:15.364870 kubelet[1416]: I1002 19:27:15.364833 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tfvfl" nodeCondition=["DiskPressure"] Oct 2 19:27:15.424514 env[1110]: time="2023-10-02T19:27:15.424444426Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:15.425543 env[1110]: time="2023-10-02T19:27:15.425501454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:15.425818 kubelet[1416]: E1002 19:27:15.425777 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:15.425899 kubelet[1416]: E1002 19:27:15.425846 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:15.425977 kubelet[1416]: E1002 19:27:15.425962 1416 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:15.426064 kubelet[1416]: E1002 19:27:15.426032 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:15.465373 kubelet[1416]: I1002 19:27:15.465349 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lwch4" nodeCondition=["DiskPressure"] Oct 2 19:27:15.565775 kubelet[1416]: I1002 19:27:15.565730 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vljr2" nodeCondition=["DiskPressure"] Oct 2 19:27:15.765689 kubelet[1416]: I1002 19:27:15.765569 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-frd9k" nodeCondition=["DiskPressure"] Oct 2 19:27:15.859086 kubelet[1416]: E1002 19:27:15.859043 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:27:15.866569 kubelet[1416]: I1002 19:27:15.866525 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fv4cz" nodeCondition=["DiskPressure"] Oct 2 19:27:15.965131 kubelet[1416]: I1002 19:27:15.965085 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n9jv4" nodeCondition=["DiskPressure"] Oct 2 19:27:16.166572 kubelet[1416]: I1002 19:27:16.166526 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hn4mg" nodeCondition=["DiskPressure"] Oct 2 19:27:16.265996 kubelet[1416]: I1002 19:27:16.265937 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w6pq2" nodeCondition=["DiskPressure"] Oct 2 19:27:16.314435 kubelet[1416]: I1002 19:27:16.314379 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tm6bt" nodeCondition=["DiskPressure"] Oct 2 19:27:16.424229 kubelet[1416]: I1002 19:27:16.424079 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xcxfk" nodeCondition=["DiskPressure"] Oct 2 19:27:16.516886 kubelet[1416]: I1002 19:27:16.516811 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-swq77" nodeCondition=["DiskPressure"] Oct 2 19:27:16.615443 kubelet[1416]: I1002 19:27:16.615385 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tffbb" nodeCondition=["DiskPressure"] Oct 2 19:27:16.715593 kubelet[1416]: I1002 19:27:16.715462 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tg45v" nodeCondition=["DiskPressure"] Oct 2 19:27:16.815187 kubelet[1416]: I1002 19:27:16.815132 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qqt89" nodeCondition=["DiskPressure"] Oct 2 19:27:16.859580 kubelet[1416]: E1002 19:27:16.859520 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:16.916415 kubelet[1416]: I1002 19:27:16.916362 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2c2fz" nodeCondition=["DiskPressure"] Oct 2 19:27:17.017361 kubelet[1416]: I1002 19:27:17.017237 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x678f" nodeCondition=["DiskPressure"] Oct 2 19:27:17.116976 kubelet[1416]: I1002 19:27:17.116924 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xkzlr" nodeCondition=["DiskPressure"] Oct 2 19:27:17.218720 kubelet[1416]: I1002 19:27:17.218663 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g92cg" nodeCondition=["DiskPressure"] Oct 2 19:27:17.317544 kubelet[1416]: I1002 19:27:17.317270 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9xx9k" nodeCondition=["DiskPressure"] Oct 2 19:27:17.415375 kubelet[1416]: I1002 19:27:17.415324 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v2dsg" nodeCondition=["DiskPressure"] Oct 2 19:27:17.516474 kubelet[1416]: I1002 19:27:17.516428 
1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4z9hh" nodeCondition=["DiskPressure"] Oct 2 19:27:17.616119 kubelet[1416]: I1002 19:27:17.616082 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fx6bl" nodeCondition=["DiskPressure"] Oct 2 19:27:17.716553 kubelet[1416]: I1002 19:27:17.716505 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zjnpw" nodeCondition=["DiskPressure"] Oct 2 19:27:17.818185 kubelet[1416]: I1002 19:27:17.818139 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hv6tn" nodeCondition=["DiskPressure"] Oct 2 19:27:17.860365 kubelet[1416]: E1002 19:27:17.860338 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:18.015963 kubelet[1416]: I1002 19:27:18.015840 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4s786" nodeCondition=["DiskPressure"] Oct 2 19:27:18.116517 kubelet[1416]: I1002 19:27:18.116480 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jvnp2" nodeCondition=["DiskPressure"] Oct 2 19:27:18.216114 kubelet[1416]: I1002 19:27:18.216060 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fsvnf" nodeCondition=["DiskPressure"] Oct 2 19:27:18.315953 kubelet[1416]: I1002 19:27:18.315831 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jj8mk" nodeCondition=["DiskPressure"] Oct 2 19:27:18.418223 kubelet[1416]: I1002 19:27:18.418181 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b6dkl" nodeCondition=["DiskPressure"] Oct 2 19:27:18.543182 kubelet[1416]: I1002 19:27:18.543148 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:18.543182 kubelet[1416]: I1002 19:27:18.543181 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:18.544276 kubelet[1416]: I1002 19:27:18.544253 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:18.554567 kubelet[1416]: I1002 19:27:18.554523 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:18.554776 kubelet[1416]: I1002 19:27:18.554611 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554635 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554646 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554655 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554665 1416 
eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554673 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:18.554776 kubelet[1416]: E1002 19:27:18.554682 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:18.554776 kubelet[1416]: I1002 19:27:18.554691 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:18.619925 kubelet[1416]: I1002 19:27:18.619854 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4dgk6" nodeCondition=["DiskPressure"] Oct 2 19:27:18.717687 kubelet[1416]: I1002 19:27:18.717615 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pbh85" nodeCondition=["DiskPressure"] Oct 2 19:27:18.765960 kubelet[1416]: I1002 19:27:18.765916 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m99p9" nodeCondition=["DiskPressure"] Oct 2 19:27:18.861026 kubelet[1416]: E1002 19:27:18.860966 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:18.865452 kubelet[1416]: I1002 19:27:18.865412 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hmt2h" nodeCondition=["DiskPressure"] Oct 2 19:27:18.965602 kubelet[1416]: I1002 19:27:18.965461 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dczx9" nodeCondition=["DiskPressure"] Oct 2 19:27:19.068550 kubelet[1416]: I1002 19:27:19.068490 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k6gxq" nodeCondition=["DiskPressure"] Oct 2 19:27:19.168582 kubelet[1416]: I1002 19:27:19.168515 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tmhzv" nodeCondition=["DiskPressure"] Oct 2 19:27:19.266673 kubelet[1416]: I1002 19:27:19.266404 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wflqd" nodeCondition=["DiskPressure"] Oct 2 19:27:19.367150 kubelet[1416]: I1002 19:27:19.367104 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xs66f" nodeCondition=["DiskPressure"] Oct 2 19:27:19.466187 kubelet[1416]: I1002 19:27:19.466142 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wdpfd" nodeCondition=["DiskPressure"] Oct 2 19:27:19.566104 kubelet[1416]: I1002 19:27:19.565982 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4pzqj" nodeCondition=["DiskPressure"] Oct 2 19:27:19.615220 kubelet[1416]: I1002 19:27:19.615181 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5vmgs" nodeCondition=["DiskPressure"] Oct 2 19:27:19.715886 kubelet[1416]: I1002 19:27:19.715836 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-km2zh" nodeCondition=["DiskPressure"] Oct 2 19:27:19.815722 kubelet[1416]: I1002 19:27:19.815662 1416 eviction_manager.go:170] "Failed 
to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kdrqs" nodeCondition=["DiskPressure"] Oct 2 19:27:19.861412 kubelet[1416]: E1002 19:27:19.861381 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:19.914211 kubelet[1416]: I1002 19:27:19.914165 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gks8k" nodeCondition=["DiskPressure"] Oct 2 19:27:20.014005 kubelet[1416]: I1002 19:27:20.013970 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-569d4" nodeCondition=["DiskPressure"] Oct 2 19:27:20.063838 kubelet[1416]: I1002 19:27:20.063795 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tl7s8" nodeCondition=["DiskPressure"] Oct 2 19:27:20.166010 kubelet[1416]: I1002 19:27:20.165680 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f96j6" nodeCondition=["DiskPressure"] Oct 2 19:27:20.266101 kubelet[1416]: I1002 19:27:20.266058 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dxn4g" nodeCondition=["DiskPressure"] Oct 2 19:27:20.365932 kubelet[1416]: I1002 19:27:20.365888 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dshnk" nodeCondition=["DiskPressure"] Oct 2 19:27:20.566558 kubelet[1416]: I1002 19:27:20.566442 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nsjs6" nodeCondition=["DiskPressure"] Oct 2 19:27:20.666444 kubelet[1416]: I1002 19:27:20.666412 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-plc65" nodeCondition=["DiskPressure"] Oct 2 19:27:20.767462 kubelet[1416]: I1002 19:27:20.767415 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-npz4k" nodeCondition=["DiskPressure"] Oct 2 19:27:20.862315 kubelet[1416]: E1002 19:27:20.862278 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:20.865382 kubelet[1416]: I1002 19:27:20.865355 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-db75k" nodeCondition=["DiskPressure"] Oct 2 19:27:20.965074 kubelet[1416]: I1002 19:27:20.965034 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xzgqv" nodeCondition=["DiskPressure"] Oct 2 19:27:21.166872 kubelet[1416]: I1002 19:27:21.166730 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mdk6d" nodeCondition=["DiskPressure"] Oct 2 19:27:21.265806 kubelet[1416]: I1002 19:27:21.265753 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2wsnr" nodeCondition=["DiskPressure"] Oct 2 19:27:21.366476 kubelet[1416]: I1002 19:27:21.366441 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-94hcq" nodeCondition=["DiskPressure"] Oct 2 19:27:21.567202 kubelet[1416]: I1002 19:27:21.567070 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zwjkv" nodeCondition=["DiskPressure"] Oct 2 
19:27:21.666327 kubelet[1416]: I1002 19:27:21.666278 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gsjnk" nodeCondition=["DiskPressure"] Oct 2 19:27:21.769261 kubelet[1416]: I1002 19:27:21.769217 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5722j" nodeCondition=["DiskPressure"] Oct 2 19:27:21.862722 kubelet[1416]: E1002 19:27:21.862691 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:21.865467 kubelet[1416]: I1002 19:27:21.865438 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-85pp4" nodeCondition=["DiskPressure"] Oct 2 19:27:21.915769 kubelet[1416]: I1002 19:27:21.915722 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-59nvc" nodeCondition=["DiskPressure"] Oct 2 19:27:22.016655 kubelet[1416]: I1002 19:27:22.016611 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-phhz4" nodeCondition=["DiskPressure"] Oct 2 19:27:22.116205 kubelet[1416]: I1002 19:27:22.116101 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ml4ll" nodeCondition=["DiskPressure"] Oct 2 19:27:22.217817 kubelet[1416]: I1002 19:27:22.217777 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-phmrv" nodeCondition=["DiskPressure"] Oct 2 19:27:22.420181 kubelet[1416]: I1002 19:27:22.419927 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nwwqk" nodeCondition=["DiskPressure"] Oct 2 19:27:22.517998 kubelet[1416]: I1002 19:27:22.517940 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4svkl" nodeCondition=["DiskPressure"] Oct 2 19:27:22.615657 kubelet[1416]: I1002 19:27:22.615606 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rmf8p" nodeCondition=["DiskPressure"] Oct 2 19:27:22.816145 kubelet[1416]: I1002 19:27:22.816009 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zg6j6" nodeCondition=["DiskPressure"] Oct 2 19:27:22.863469 kubelet[1416]: E1002 19:27:22.863405 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:22.915846 kubelet[1416]: I1002 19:27:22.915800 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lt566" nodeCondition=["DiskPressure"] Oct 2 19:27:23.015989 kubelet[1416]: I1002 19:27:23.015942 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n2mk7" nodeCondition=["DiskPressure"] Oct 2 19:27:23.116453 kubelet[1416]: I1002 19:27:23.116414 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6h4gc" nodeCondition=["DiskPressure"] Oct 2 19:27:23.217381 kubelet[1416]: I1002 19:27:23.217323 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fvgss" nodeCondition=["DiskPressure"] Oct 2 19:27:23.416881 kubelet[1416]: I1002 19:27:23.416735 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-7ndnv" nodeCondition=["DiskPressure"] Oct 2 19:27:23.516120 kubelet[1416]: I1002 19:27:23.516070 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n7ftt" nodeCondition=["DiskPressure"] Oct 2 19:27:23.617741 kubelet[1416]: I1002 19:27:23.617675 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zp2fr" nodeCondition=["DiskPressure"] Oct 2 19:27:23.720574 kubelet[1416]: I1002 19:27:23.720410 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hjlks" nodeCondition=["DiskPressure"] Oct 2 19:27:23.766964 kubelet[1416]: I1002 19:27:23.766907 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x77jq" nodeCondition=["DiskPressure"] Oct 2 19:27:23.864549 kubelet[1416]: E1002 19:27:23.864491 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:23.865830 kubelet[1416]: I1002 19:27:23.865782 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-448v7" nodeCondition=["DiskPressure"] Oct 2 19:27:24.065635 kubelet[1416]: I1002 19:27:24.065487 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5lnz7" nodeCondition=["DiskPressure"] Oct 2 19:27:24.167086 kubelet[1416]: I1002 19:27:24.167040 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qr82f" nodeCondition=["DiskPressure"] Oct 2 19:27:24.270092 kubelet[1416]: I1002 19:27:24.270037 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xmpbc" nodeCondition=["DiskPressure"] Oct 2 19:27:24.468160 kubelet[1416]: I1002 19:27:24.468084 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7xj2g" nodeCondition=["DiskPressure"] Oct 2 19:27:24.570041 kubelet[1416]: I1002 19:27:24.569980 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dzqmk" nodeCondition=["DiskPressure"] Oct 2 19:27:24.672192 kubelet[1416]: I1002 19:27:24.672134 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h892p" nodeCondition=["DiskPressure"] Oct 2 19:27:24.865162 kubelet[1416]: E1002 19:27:24.865108 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.867407 kubelet[1416]: I1002 19:27:24.867374 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-96nwt" nodeCondition=["DiskPressure"] Oct 2 19:27:24.966740 kubelet[1416]: I1002 19:27:24.966703 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tp7tw" nodeCondition=["DiskPressure"] Oct 2 19:27:25.015933 kubelet[1416]: I1002 19:27:25.015895 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pfhgs" nodeCondition=["DiskPressure"] Oct 2 19:27:25.118524 kubelet[1416]: I1002 19:27:25.118233 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wv8rs" nodeCondition=["DiskPressure"] Oct 2 19:27:25.317326 
kubelet[1416]: I1002 19:27:25.317269 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qdqcw" nodeCondition=["DiskPressure"] Oct 2 19:27:25.416161 kubelet[1416]: I1002 19:27:25.415884 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tktc4" nodeCondition=["DiskPressure"] Oct 2 19:27:25.518224 kubelet[1416]: I1002 19:27:25.518170 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-78cgl" nodeCondition=["DiskPressure"] Oct 2 19:27:25.617086 kubelet[1416]: I1002 19:27:25.617037 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f24pl" nodeCondition=["DiskPressure"] Oct 2 19:27:25.667223 kubelet[1416]: I1002 19:27:25.667120 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gcwzn" nodeCondition=["DiskPressure"] Oct 2 19:27:25.767084 kubelet[1416]: I1002 19:27:25.767007 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tskj8" nodeCondition=["DiskPressure"] Oct 2 19:27:25.865751 kubelet[1416]: E1002 19:27:25.865705 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:25.868215 kubelet[1416]: I1002 19:27:25.868181 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ms7sm" nodeCondition=["DiskPressure"] Oct 2 19:27:25.918384 kubelet[1416]: I1002 19:27:25.918267 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vdjbf" nodeCondition=["DiskPressure"] Oct 2 19:27:26.018988 kubelet[1416]: I1002 19:27:26.018943 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-75sqx" nodeCondition=["DiskPressure"] Oct 2 19:27:26.117592 kubelet[1416]: I1002 19:27:26.117524 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dvkqs" nodeCondition=["DiskPressure"] Oct 2 19:27:26.218702 kubelet[1416]: I1002 19:27:26.218370 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7p4z6" nodeCondition=["DiskPressure"] Oct 2 19:27:26.416954 kubelet[1416]: I1002 19:27:26.416903 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-58lnv" nodeCondition=["DiskPressure"] Oct 2 19:27:26.517861 kubelet[1416]: I1002 19:27:26.517713 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dwq7k" nodeCondition=["DiskPressure"] Oct 2 19:27:26.617228 kubelet[1416]: I1002 19:27:26.617172 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rlwtl" nodeCondition=["DiskPressure"] Oct 2 19:27:26.817107 kubelet[1416]: I1002 19:27:26.816864 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gb5vb" nodeCondition=["DiskPressure"] Oct 2 19:27:26.866835 kubelet[1416]: E1002 19:27:26.866808 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:26.916333 kubelet[1416]: I1002 19:27:26.916293 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-24dz8" nodeCondition=["DiskPressure"] Oct 2 19:27:27.015540 kubelet[1416]: I1002 19:27:27.015500 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-br46g" nodeCondition=["DiskPressure"] Oct 2 19:27:27.117296 kubelet[1416]: I1002 19:27:27.117263 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tvc25" nodeCondition=["DiskPressure"] Oct 2 19:27:27.216670 kubelet[1416]: I1002 19:27:27.216624 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bpk4x" nodeCondition=["DiskPressure"] Oct 2 19:27:27.316777 kubelet[1416]: I1002 19:27:27.316729 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l9jj9" nodeCondition=["DiskPressure"] Oct 2 19:27:27.416864 kubelet[1416]: I1002 19:27:27.416409 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-68q4j" nodeCondition=["DiskPressure"] Oct 2 19:27:27.518017 kubelet[1416]: I1002 19:27:27.517959 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vdpj2" nodeCondition=["DiskPressure"] Oct 2 19:27:27.622844 kubelet[1416]: I1002 19:27:27.622774 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k29wr" nodeCondition=["DiskPressure"] Oct 2 19:27:27.718603 kubelet[1416]: I1002 19:27:27.718455 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-44mxn" nodeCondition=["DiskPressure"] Oct 2 19:27:27.816949 kubelet[1416]: I1002 19:27:27.816902 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xhszx" nodeCondition=["DiskPressure"] Oct 2 19:27:27.867514 kubelet[1416]: E1002 19:27:27.867467 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:27.917655 kubelet[1416]: I1002 19:27:27.917595 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q9w6f" nodeCondition=["DiskPressure"] Oct 2 19:27:27.968260 kubelet[1416]: I1002 19:27:27.968202 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ckkmw" nodeCondition=["DiskPressure"] Oct 2 19:27:28.068210 kubelet[1416]: I1002 19:27:28.068075 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k6kbw" nodeCondition=["DiskPressure"] Oct 2 19:27:28.167194 kubelet[1416]: I1002 19:27:28.167143 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bxmqn" nodeCondition=["DiskPressure"] Oct 2 19:27:28.269530 kubelet[1416]: I1002 19:27:28.269469 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fdzhp" nodeCondition=["DiskPressure"] Oct 2 19:27:28.469519 kubelet[1416]: I1002 19:27:28.469447 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mt5lb" nodeCondition=["DiskPressure"] Oct 2 19:27:28.567763 kubelet[1416]: I1002 19:27:28.567729 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:28.567763 kubelet[1416]: I1002 
19:27:28.567762 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:28.569670 kubelet[1416]: I1002 19:27:28.569644 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:28.575155 kubelet[1416]: I1002 19:27:28.575108 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z6s6c" nodeCondition=["DiskPressure"] Oct 2 19:27:28.581640 kubelet[1416]: I1002 19:27:28.581613 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:28.581838 kubelet[1416]: I1002 19:27:28.581671 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581694 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581705 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581715 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581722 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581731 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:28.581838 kubelet[1416]: E1002 19:27:28.581738 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:28.581838 kubelet[1416]: I1002 19:27:28.581747 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:28.668024 kubelet[1416]: I1002 19:27:28.667974 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ckskp" nodeCondition=["DiskPressure"] Oct 2 19:27:28.868227 kubelet[1416]: E1002 19:27:28.868172 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:28.869425 kubelet[1416]: I1002 19:27:28.869395 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xrzvd" nodeCondition=["DiskPressure"] Oct 2 19:27:28.970701 kubelet[1416]: I1002 19:27:28.970641 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h8szc" nodeCondition=["DiskPressure"] Oct 2 19:27:29.016992 kubelet[1416]: I1002 19:27:29.016939 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xtztk" nodeCondition=["DiskPressure"] Oct 2 19:27:29.118016 kubelet[1416]: I1002 19:27:29.117970 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dwjvx" nodeCondition=["DiskPressure"] Oct 2 19:27:29.218109 kubelet[1416]: I1002 19:27:29.217979 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-jcghg" nodeCondition=["DiskPressure"] Oct 2 19:27:29.318323 kubelet[1416]: I1002 19:27:29.318278 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8nn7f" nodeCondition=["DiskPressure"] Oct 2 19:27:29.417901 kubelet[1416]: I1002 19:27:29.417859 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kwjgh" nodeCondition=["DiskPressure"] Oct 2 19:27:29.520503 kubelet[1416]: I1002 19:27:29.520364 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p42rj" nodeCondition=["DiskPressure"] Oct 2 19:27:29.618391 kubelet[1416]: I1002 19:27:29.618353 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pv755" nodeCondition=["DiskPressure"] Oct 2 19:27:29.717814 kubelet[1416]: I1002 19:27:29.717747 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vlr9d" nodeCondition=["DiskPressure"] Oct 2 19:27:29.868644 kubelet[1416]: E1002 19:27:29.868596 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:29.914502 kubelet[1416]: E1002 19:27:29.914442 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:29.918876 kubelet[1416]: I1002 19:27:29.918844 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h5mwn" nodeCondition=["DiskPressure"] Oct 2 19:27:30.019933 kubelet[1416]: I1002 19:27:30.019866 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kmmmh" nodeCondition=["DiskPressure"] Oct 2 19:27:30.119033 kubelet[1416]: I1002 19:27:30.118916 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v8kxb" nodeCondition=["DiskPressure"] Oct 2 19:27:30.318691 kubelet[1416]: I1002 19:27:30.318637 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4g2c5" nodeCondition=["DiskPressure"] Oct 2 19:27:30.417989 kubelet[1416]: I1002 19:27:30.417854 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nwzn9" nodeCondition=["DiskPressure"] Oct 2 19:27:30.524768 kubelet[1416]: I1002 19:27:30.524707 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gq9s5" nodeCondition=["DiskPressure"] Oct 2 19:27:30.719396 kubelet[1416]: I1002 19:27:30.719133 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-44r8p" nodeCondition=["DiskPressure"] Oct 2 19:27:30.817629 kubelet[1416]: I1002 19:27:30.817566 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fmlgb" nodeCondition=["DiskPressure"] Oct 2 19:27:30.868613 kubelet[1416]: I1002 19:27:30.868556 1416 eviction_manager.go:170] "Failed 
to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pn5sl" nodeCondition=["DiskPressure"] Oct 2 19:27:30.868848 kubelet[1416]: E1002 19:27:30.868690 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:30.968527 kubelet[1416]: I1002 19:27:30.968466 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ckk6p" nodeCondition=["DiskPressure"] Oct 2 19:27:31.168283 kubelet[1416]: I1002 19:27:31.168235 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-c9nqt" nodeCondition=["DiskPressure"] Oct 2 19:27:31.267234 kubelet[1416]: I1002 19:27:31.267188 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ctbql" nodeCondition=["DiskPressure"] Oct 2 19:27:31.367628 kubelet[1416]: I1002 19:27:31.367577 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bl254" nodeCondition=["DiskPressure"] Oct 2 19:27:31.467692 kubelet[1416]: I1002 19:27:31.467564 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t4vpt" nodeCondition=["DiskPressure"] Oct 2 19:27:31.567951 kubelet[1416]: I1002 19:27:31.567892 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lpdsb" nodeCondition=["DiskPressure"] Oct 2 19:27:31.767765 kubelet[1416]: I1002 19:27:31.767630 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4w2sk" nodeCondition=["DiskPressure"] Oct 2 19:27:31.867554 kubelet[1416]: I1002 19:27:31.867479 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4bg2k" nodeCondition=["DiskPressure"] Oct 2 19:27:31.869133 kubelet[1416]: E1002 19:27:31.869100 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:31.918743 kubelet[1416]: I1002 19:27:31.918689 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xhj7t" nodeCondition=["DiskPressure"] Oct 2 19:27:32.022409 kubelet[1416]: I1002 19:27:32.022263 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gqmmk" nodeCondition=["DiskPressure"] Oct 2 19:27:32.219029 kubelet[1416]: I1002 19:27:32.218949 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-84nmg" nodeCondition=["DiskPressure"] Oct 2 19:27:32.319062 kubelet[1416]: I1002 19:27:32.318913 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rssfj" nodeCondition=["DiskPressure"] Oct 2 19:27:32.419315 kubelet[1416]: I1002 19:27:32.419257 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l5rnw" nodeCondition=["DiskPressure"] Oct 2 19:27:32.520444 kubelet[1416]: I1002 19:27:32.520388 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vnl6v" nodeCondition=["DiskPressure"] Oct 2 19:27:32.618505 kubelet[1416]: I1002 19:27:32.618455 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cc2mx" nodeCondition=["DiskPressure"] Oct 2 
19:27:32.717834 kubelet[1416]: I1002 19:27:32.717769 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2ghss" nodeCondition=["DiskPressure"] Oct 2 19:27:32.819127 kubelet[1416]: I1002 19:27:32.819077 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9xwz4" nodeCondition=["DiskPressure"] Oct 2 19:27:32.869682 kubelet[1416]: E1002 19:27:32.869598 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:32.920991 kubelet[1416]: I1002 19:27:32.920940 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vwc8c" nodeCondition=["DiskPressure"] Oct 2 19:27:33.018447 kubelet[1416]: I1002 19:27:33.018394 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g5x9h" nodeCondition=["DiskPressure"] Oct 2 19:27:33.219181 kubelet[1416]: I1002 19:27:33.218782 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d7xpr" nodeCondition=["DiskPressure"] Oct 2 19:27:33.319178 kubelet[1416]: I1002 19:27:33.319100 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-whdrg" nodeCondition=["DiskPressure"] Oct 2 19:27:33.418242 kubelet[1416]: I1002 19:27:33.418190 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-czdd9" nodeCondition=["DiskPressure"] Oct 2 19:27:33.618954 kubelet[1416]: I1002 19:27:33.618877 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bwzlg" nodeCondition=["DiskPressure"] Oct 2 19:27:33.719674 kubelet[1416]: I1002 19:27:33.719608 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qmj5b" nodeCondition=["DiskPressure"] Oct 2 19:27:33.819046 kubelet[1416]: I1002 19:27:33.818965 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qnz8q" nodeCondition=["DiskPressure"] Oct 2 19:27:33.870728 kubelet[1416]: E1002 19:27:33.870553 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:33.918166 kubelet[1416]: I1002 19:27:33.918102 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n55xn" nodeCondition=["DiskPressure"] Oct 2 19:27:34.019439 kubelet[1416]: I1002 19:27:34.019383 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5fp9c" nodeCondition=["DiskPressure"] Oct 2 19:27:34.118436 kubelet[1416]: I1002 19:27:34.118380 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-t2wn5" nodeCondition=["DiskPressure"] Oct 2 19:27:34.218899 kubelet[1416]: I1002 19:27:34.218736 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r8p2h" nodeCondition=["DiskPressure"] Oct 2 19:27:34.318639 kubelet[1416]: I1002 19:27:34.318579 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bn67x" nodeCondition=["DiskPressure"] Oct 2 19:27:34.419507 kubelet[1416]: I1002 19:27:34.419449 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-tws9q" nodeCondition=["DiskPressure"] Oct 2 19:27:34.519174 kubelet[1416]: I1002 19:27:34.518862 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6mh7g" nodeCondition=["DiskPressure"] Oct 2 19:27:34.619267 kubelet[1416]: I1002 19:27:34.619217 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v7gqc" nodeCondition=["DiskPressure"] Oct 2 19:27:34.721074 kubelet[1416]: I1002 19:27:34.721008 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qzhfk" nodeCondition=["DiskPressure"] Oct 2 19:27:34.819410 kubelet[1416]: E1002 19:27:34.819232 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.827512 kubelet[1416]: I1002 19:27:34.827459 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vtf7g" nodeCondition=["DiskPressure"] Oct 2 19:27:34.871635 kubelet[1416]: E1002 19:27:34.871565 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.920340 kubelet[1416]: I1002 19:27:34.920289 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2x9rg" nodeCondition=["DiskPressure"] Oct 2 19:27:35.018782 kubelet[1416]: I1002 19:27:35.018732 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kxg6q" nodeCondition=["DiskPressure"] Oct 2 19:27:35.119109 kubelet[1416]: I1002 19:27:35.119031 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p4nqb" nodeCondition=["DiskPressure"] Oct 2 19:27:35.218650 kubelet[1416]: I1002 19:27:35.218576 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b85jx" nodeCondition=["DiskPressure"] Oct 2 19:27:35.318162 kubelet[1416]: I1002 19:27:35.318099 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h7nhh" nodeCondition=["DiskPressure"] Oct 2 19:27:35.419125 kubelet[1416]: I1002 19:27:35.418998 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vgqsh" nodeCondition=["DiskPressure"] Oct 2 19:27:35.521106 kubelet[1416]: I1002 19:27:35.521056 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-thwm7" nodeCondition=["DiskPressure"] Oct 2 19:27:35.619689 kubelet[1416]: I1002 19:27:35.619628 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4lhxd" nodeCondition=["DiskPressure"] Oct 2 19:27:35.818860 kubelet[1416]: I1002 19:27:35.818567 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2fx8r" nodeCondition=["DiskPressure"] Oct 2 19:27:35.871956 kubelet[1416]: E1002 19:27:35.871903 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:35.918028 kubelet[1416]: I1002 19:27:35.917980 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9g6tk" nodeCondition=["DiskPressure"] Oct 2 19:27:36.017954 kubelet[1416]: I1002 19:27:36.017899 1416 
eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vsspt" nodeCondition=["DiskPressure"] Oct 2 19:27:36.219018 kubelet[1416]: I1002 19:27:36.218963 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wzz9f" nodeCondition=["DiskPressure"] Oct 2 19:27:36.321431 kubelet[1416]: I1002 19:27:36.321366 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5srn9" nodeCondition=["DiskPressure"] Oct 2 19:27:36.418716 kubelet[1416]: I1002 19:27:36.418674 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fzkwg" nodeCondition=["DiskPressure"] Oct 2 19:27:36.518772 kubelet[1416]: I1002 19:27:36.518480 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-48thp" nodeCondition=["DiskPressure"] Oct 2 19:27:36.618432 kubelet[1416]: I1002 19:27:36.618372 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-59kz2" nodeCondition=["DiskPressure"] Oct 2 19:27:36.718252 kubelet[1416]: I1002 19:27:36.718192 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5lzpt" nodeCondition=["DiskPressure"] Oct 2 19:27:36.819337 kubelet[1416]: I1002 19:27:36.819182 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7tkm5" nodeCondition=["DiskPressure"] Oct 2 19:27:36.872650 kubelet[1416]: E1002 19:27:36.872590 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:36.924813 kubelet[1416]: I1002 19:27:36.924769 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-snm9n" nodeCondition=["DiskPressure"] Oct 2 19:27:37.020437 kubelet[1416]: I1002 19:27:37.020376 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6mkf4" nodeCondition=["DiskPressure"] Oct 2 19:27:37.119587 kubelet[1416]: I1002 19:27:37.119525 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-774kv" nodeCondition=["DiskPressure"] Oct 2 19:27:37.218807 kubelet[1416]: I1002 19:27:37.218732 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-59r92" nodeCondition=["DiskPressure"] Oct 2 19:27:37.322659 kubelet[1416]: I1002 19:27:37.322597 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xvhxv" nodeCondition=["DiskPressure"] Oct 2 19:27:37.419865 kubelet[1416]: I1002 19:27:37.419689 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d7wnt" nodeCondition=["DiskPressure"] Oct 2 19:27:37.519269 kubelet[1416]: I1002 19:27:37.519208 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4v9gl" nodeCondition=["DiskPressure"] Oct 2 19:27:37.620669 kubelet[1416]: I1002 19:27:37.620605 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bjk9v" nodeCondition=["DiskPressure"] Oct 2 19:27:37.825543 kubelet[1416]: I1002 19:27:37.825379 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-9lqhg" nodeCondition=["DiskPressure"] Oct 2 19:27:37.873515 kubelet[1416]: E1002 19:27:37.873464 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:37.918094 kubelet[1416]: I1002 19:27:37.918058 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jlhsg" nodeCondition=["DiskPressure"] Oct 2 19:27:37.969466 kubelet[1416]: I1002 19:27:37.969409 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rrhf7" nodeCondition=["DiskPressure"] Oct 2 19:27:38.071588 kubelet[1416]: I1002 19:27:38.071533 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vff89" nodeCondition=["DiskPressure"] Oct 2 19:27:38.169081 kubelet[1416]: I1002 19:27:38.169017 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n7d5p" nodeCondition=["DiskPressure"] Oct 2 19:27:38.270023 kubelet[1416]: I1002 19:27:38.269961 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vxdzk" nodeCondition=["DiskPressure"] Oct 2 19:27:38.369158 kubelet[1416]: I1002 19:27:38.369112 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w44v7" nodeCondition=["DiskPressure"] Oct 2 19:27:38.469437 kubelet[1416]: I1002 19:27:38.469293 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tw29r" nodeCondition=["DiskPressure"] Oct 2 19:27:38.574249 kubelet[1416]: I1002 19:27:38.574182 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wbpd5" nodeCondition=["DiskPressure"] Oct 2 19:27:38.595147 kubelet[1416]: I1002 19:27:38.595114 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:38.595147 kubelet[1416]: I1002 19:27:38.595144 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:38.596563 kubelet[1416]: I1002 19:27:38.596527 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:38.605534 kubelet[1416]: I1002 19:27:38.605508 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:38.605671 kubelet[1416]: I1002 19:27:38.605598 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-8c5qr","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:38.605671 kubelet[1416]: E1002 19:27:38.605629 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:38.605671 kubelet[1416]: E1002 19:27:38.605646 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:38.605671 kubelet[1416]: E1002 19:27:38.605658 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:38.605671 kubelet[1416]: E1002 19:27:38.605670 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:38.605877 kubelet[1416]: E1002 19:27:38.605681 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:38.605877 kubelet[1416]: E1002 19:27:38.605693 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:38.605877 kubelet[1416]: I1002 19:27:38.605706 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:38.670920 kubelet[1416]: I1002 19:27:38.670859 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dh5j9" nodeCondition=["DiskPressure"] Oct 2 19:27:38.769340 kubelet[1416]: I1002 19:27:38.769185 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m9x8w" nodeCondition=["DiskPressure"] Oct 2 19:27:38.822416 kubelet[1416]: I1002 19:27:38.822362 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qlt7c" nodeCondition=["DiskPressure"] Oct 2 19:27:38.873947 kubelet[1416]: E1002 19:27:38.873894 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:38.920917 kubelet[1416]: I1002 19:27:38.920864 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cptt8" nodeCondition=["DiskPressure"] Oct 2 19:27:39.019297 kubelet[1416]: I1002 19:27:39.019235 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tjzb8" nodeCondition=["DiskPressure"] Oct 2 19:27:39.120081 kubelet[1416]: I1002 19:27:39.120026 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zv6wz" nodeCondition=["DiskPressure"] Oct 2 19:27:39.220350 kubelet[1416]: I1002 19:27:39.220294 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-77cnq" nodeCondition=["DiskPressure"] Oct 2 19:27:39.319579 kubelet[1416]: I1002 19:27:39.319520 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f5f6t" nodeCondition=["DiskPressure"] Oct 2 19:27:39.421113 kubelet[1416]: I1002 19:27:39.420990 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m6bww" nodeCondition=["DiskPressure"] Oct 2 19:27:39.472053 kubelet[1416]: I1002 19:27:39.471966 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9qsjj" nodeCondition=["DiskPressure"] Oct 2 19:27:39.569855 kubelet[1416]: I1002 19:27:39.569780 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-975vh" nodeCondition=["DiskPressure"] Oct 2 19:27:39.673530 kubelet[1416]: I1002 19:27:39.673221 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2gvpp" nodeCondition=["DiskPressure"] Oct 2 19:27:39.770675 kubelet[1416]: I1002 19:27:39.770598 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wvndp" nodeCondition=["DiskPressure"] Oct 2 19:27:39.869057 kubelet[1416]: I1002 19:27:39.868991 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-w5cgv" nodeCondition=["DiskPressure"] Oct 2 19:27:39.874330 kubelet[1416]: E1002 19:27:39.874307 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:39.919025 kubelet[1416]: I1002 19:27:39.918985 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tqlpt" nodeCondition=["DiskPressure"] Oct 2 19:27:40.020773 kubelet[1416]: I1002 19:27:40.020641 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-77jf2" nodeCondition=["DiskPressure"] Oct 2 19:27:40.221002 kubelet[1416]: I1002 19:27:40.220919 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hsxct" nodeCondition=["DiskPressure"] Oct 2 19:27:40.325479 kubelet[1416]: I1002 19:27:40.325293 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4kcks" nodeCondition=["DiskPressure"] Oct 2 19:27:40.420274 kubelet[1416]: I1002 19:27:40.420211 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cpk5x" nodeCondition=["DiskPressure"] Oct 2 19:27:40.620644 kubelet[1416]: I1002 19:27:40.620580 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-27h5z" nodeCondition=["DiskPressure"] Oct 2 19:27:40.720329 kubelet[1416]: I1002 19:27:40.720273 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mdbm8" nodeCondition=["DiskPressure"] Oct 2 19:27:40.769554 kubelet[1416]: I1002 19:27:40.769496 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dr99f" nodeCondition=["DiskPressure"] Oct 2 19:27:40.871305 kubelet[1416]: I1002 19:27:40.871167 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pr6b9" nodeCondition=["DiskPressure"] Oct 2 19:27:40.874453 kubelet[1416]: E1002 19:27:40.874428 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:41.069935 kubelet[1416]: I1002 19:27:41.069886 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ffk9g" nodeCondition=["DiskPressure"] Oct 2 19:27:41.173484 kubelet[1416]: I1002 19:27:41.173084 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-296v8" nodeCondition=["DiskPressure"] Oct 2 19:27:41.269449 kubelet[1416]: I1002 19:27:41.269388 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pqmr6" nodeCondition=["DiskPressure"] Oct 2 19:27:41.370651 kubelet[1416]: I1002 19:27:41.370582 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h8rzc" nodeCondition=["DiskPressure"] Oct 2 19:27:41.419985 kubelet[1416]: I1002 19:27:41.419944 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-82qj5" nodeCondition=["DiskPressure"] Oct 2 19:27:41.525934 kubelet[1416]: I1002 19:27:41.525747 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8sn9h" nodeCondition=["DiskPressure"] Oct 2 19:27:41.720040 
kubelet[1416]: I1002 19:27:41.719973 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wdxbh" nodeCondition=["DiskPressure"] Oct 2 19:27:41.819286 kubelet[1416]: I1002 19:27:41.819162 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6gmdv" nodeCondition=["DiskPressure"] Oct 2 19:27:41.874766 kubelet[1416]: E1002 19:27:41.874730 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:41.919570 kubelet[1416]: I1002 19:27:41.919532 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nzng6" nodeCondition=["DiskPressure"] Oct 2 19:27:42.020317 kubelet[1416]: I1002 19:27:42.020235 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dgzjp" nodeCondition=["DiskPressure"] Oct 2 19:27:42.118863 kubelet[1416]: I1002 19:27:42.118799 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tvg6r" nodeCondition=["DiskPressure"] Oct 2 19:27:42.220184 kubelet[1416]: I1002 19:27:42.220109 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-n6dz7" nodeCondition=["DiskPressure"] Oct 2 19:27:42.320464 kubelet[1416]: I1002 19:27:42.320403 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q7c7l" nodeCondition=["DiskPressure"] Oct 2 19:27:42.420561 kubelet[1416]: I1002 19:27:42.420269 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-njllg" nodeCondition=["DiskPressure"] Oct 2 19:27:42.521128 kubelet[1416]: I1002 19:27:42.521056 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-87fm7" nodeCondition=["DiskPressure"] Oct 2 19:27:42.619958 kubelet[1416]: I1002 19:27:42.619891 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2n9g8" nodeCondition=["DiskPressure"] Oct 2 19:27:42.721813 kubelet[1416]: I1002 19:27:42.721650 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hdml9" nodeCondition=["DiskPressure"] Oct 2 19:27:42.821144 kubelet[1416]: I1002 19:27:42.821086 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ng7n6" nodeCondition=["DiskPressure"] Oct 2 19:27:42.870253 kubelet[1416]: I1002 19:27:42.870202 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fblbl" nodeCondition=["DiskPressure"] Oct 2 19:27:42.875377 kubelet[1416]: E1002 19:27:42.875333 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:42.915055 env[1110]: time="2023-10-02T19:27:42.915010177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\"" Oct 2 19:27:42.969880 kubelet[1416]: I1002 19:27:42.969834 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6k8sk" nodeCondition=["DiskPressure"] Oct 2 19:27:43.070181 kubelet[1416]: I1002 19:27:43.070051 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vrmd4" nodeCondition=["DiskPressure"] Oct 
2 19:27:43.171844 kubelet[1416]: I1002 19:27:43.171781 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-m68xb" nodeCondition=["DiskPressure"] Oct 2 19:27:43.197967 env[1110]: time="2023-10-02T19:27:43.197860619Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:43.199055 env[1110]: time="2023-10-02T19:27:43.199024914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:43.199251 kubelet[1416]: E1002 19:27:43.199226 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:43.199328 kubelet[1416]: E1002 19:27:43.199266 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/csi:v3.25.0" Oct 2 19:27:43.199362 kubelet[1416]: E1002 19:27:43.199356 1416 kuberuntime_manager.go:1209] container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.25.0,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:etccalico,ReadOnly:false,MountPath:/etc/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: 
failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/csi:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:43.199988 env[1110]: time="2023-10-02T19:27:43.199969196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\"" Oct 2 19:27:43.274447 kubelet[1416]: I1002 19:27:43.274390 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8llqh" nodeCondition=["DiskPressure"] Oct 2 19:27:43.318710 kubelet[1416]: I1002 19:27:43.318648 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-twrf8" nodeCondition=["DiskPressure"] Oct 2 19:27:43.420322 kubelet[1416]: I1002 19:27:43.420261 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gvxdl" nodeCondition=["DiskPressure"] Oct 2 19:27:43.475613 env[1110]: time="2023-10-02T19:27:43.475534754Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io Oct 2 19:27:43.476657 env[1110]: time="2023-10-02T19:27:43.476614430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" Oct 2 19:27:43.476882 kubelet[1416]: E1002 19:27:43.476853 1416 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:43.476975 kubelet[1416]: E1002 19:27:43.476901 1416 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0" Oct 2 19:27:43.477018 kubelet[1416]: E1002 19:27:43.477008 1416 kuberuntime_manager.go:1209] container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9rj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-75kzt_calico-system(b0822001-b43f-4855-b401-678c43b136af): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to resolve reference "ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Oct 2 19:27:43.477141 kubelet[1416]: E1002 19:27:43.477072 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:43.621585 kubelet[1416]: I1002 19:27:43.621538 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2d7dg" nodeCondition=["DiskPressure"] Oct 2 19:27:43.720252 kubelet[1416]: I1002 19:27:43.720102 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bf4t7" nodeCondition=["DiskPressure"] Oct 2 19:27:43.818628 kubelet[1416]: I1002 19:27:43.818579 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2nhxz" nodeCondition=["DiskPressure"] Oct 2 19:27:43.876445 kubelet[1416]: E1002 19:27:43.876404 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:27:43.920931 kubelet[1416]: I1002 19:27:43.920889 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-86cwz" nodeCondition=["DiskPressure"] Oct 2 19:27:44.023358 kubelet[1416]: I1002 19:27:44.023214 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bfd7p" nodeCondition=["DiskPressure"] Oct 2 19:27:44.119816 kubelet[1416]: I1002 19:27:44.119748 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4thrt" nodeCondition=["DiskPressure"] Oct 2 19:27:44.169592 kubelet[1416]: I1002 19:27:44.169534 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5kct7" nodeCondition=["DiskPressure"] Oct 2 19:27:44.270462 kubelet[1416]: I1002 19:27:44.270408 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9njqb" nodeCondition=["DiskPressure"] Oct 2 19:27:44.471973 kubelet[1416]: I1002 19:27:44.471902 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xwjrk" nodeCondition=["DiskPressure"] Oct 2 19:27:44.571112 kubelet[1416]: I1002 19:27:44.571033 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8p27k" nodeCondition=["DiskPressure"] Oct 2 19:27:44.670499 kubelet[1416]: I1002 19:27:44.670432 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-666ql" nodeCondition=["DiskPressure"] Oct 2 19:27:44.874887 kubelet[1416]: I1002 19:27:44.874824 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-frbvz" nodeCondition=["DiskPressure"] Oct 2 19:27:44.876619 kubelet[1416]: E1002 19:27:44.876573 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.970853 kubelet[1416]: I1002 19:27:44.970455 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fgdqv" nodeCondition=["DiskPressure"] Oct 2 19:27:45.021611 kubelet[1416]: I1002 19:27:45.021567 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tpdv9" nodeCondition=["DiskPressure"] Oct 2 19:27:45.122199 kubelet[1416]: I1002 19:27:45.122117 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lthbd" nodeCondition=["DiskPressure"] Oct 2 19:27:45.223303 kubelet[1416]: I1002 19:27:45.222920 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ht9b4" nodeCondition=["DiskPressure"] Oct 2 19:27:45.270381 kubelet[1416]: I1002 19:27:45.270324 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2kxmg" nodeCondition=["DiskPressure"] Oct 2 19:27:45.370958 kubelet[1416]: I1002 19:27:45.370892 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b8dkt" nodeCondition=["DiskPressure"] Oct 2 19:27:45.472458 kubelet[1416]: I1002 19:27:45.472400 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8wlrt" nodeCondition=["DiskPressure"] Oct 2 19:27:45.572585 kubelet[1416]: I1002 19:27:45.572451 
1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2rk9c" nodeCondition=["DiskPressure"] Oct 2 19:27:45.671573 kubelet[1416]: I1002 19:27:45.671527 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rn688" nodeCondition=["DiskPressure"] Oct 2 19:27:45.775156 kubelet[1416]: I1002 19:27:45.775098 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p4wpb" nodeCondition=["DiskPressure"] Oct 2 19:27:45.871565 kubelet[1416]: I1002 19:27:45.871507 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g2krh" nodeCondition=["DiskPressure"] Oct 2 19:27:45.877061 kubelet[1416]: E1002 19:27:45.877041 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:45.971730 kubelet[1416]: I1002 19:27:45.971673 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sk4n4" nodeCondition=["DiskPressure"] Oct 2 19:27:46.071033 kubelet[1416]: I1002 19:27:46.070972 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-chd7t" nodeCondition=["DiskPressure"] Oct 2 19:27:46.173310 kubelet[1416]: I1002 19:27:46.173152 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hfk6g" nodeCondition=["DiskPressure"] Oct 2 19:27:46.271561 kubelet[1416]: I1002 19:27:46.271499 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mpjpw" nodeCondition=["DiskPressure"] Oct 2 19:27:46.373032 kubelet[1416]: I1002 19:27:46.372969 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l94dn" nodeCondition=["DiskPressure"] Oct 2 19:27:46.575651 kubelet[1416]: I1002 19:27:46.575497 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6vz46" nodeCondition=["DiskPressure"] Oct 2 19:27:46.671601 kubelet[1416]: I1002 19:27:46.671536 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-86jnc" nodeCondition=["DiskPressure"] Oct 2 19:27:46.770648 kubelet[1416]: I1002 19:27:46.770598 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8zmv4" nodeCondition=["DiskPressure"] Oct 2 19:27:46.877525 kubelet[1416]: E1002 19:27:46.877480 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:46.971328 kubelet[1416]: I1002 19:27:46.971276 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-26wkf" nodeCondition=["DiskPressure"] Oct 2 19:27:47.073508 kubelet[1416]: I1002 19:27:47.073461 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-h4z5l" nodeCondition=["DiskPressure"] Oct 2 19:27:47.091813 systemd[1]: run-containerd-runc-k8s.io-9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4-runc.Oja2xm.mount: Deactivated successfully. 
Oct 2 19:27:47.171747 kubelet[1416]: I1002 19:27:47.171343 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-s88z8" nodeCondition=["DiskPressure"] Oct 2 19:27:47.273760 kubelet[1416]: I1002 19:27:47.273707 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rqs2t" nodeCondition=["DiskPressure"] Oct 2 19:27:47.372419 kubelet[1416]: I1002 19:27:47.372358 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p5j76" nodeCondition=["DiskPressure"] Oct 2 19:27:47.470881 kubelet[1416]: I1002 19:27:47.470504 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-655lr" nodeCondition=["DiskPressure"] Oct 2 19:27:47.522229 kubelet[1416]: I1002 19:27:47.522175 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w92ll" nodeCondition=["DiskPressure"] Oct 2 19:27:47.626012 kubelet[1416]: I1002 19:27:47.625941 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xcz6s" nodeCondition=["DiskPressure"] Oct 2 19:27:47.720834 kubelet[1416]: I1002 19:27:47.720773 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cl5n7" nodeCondition=["DiskPressure"] Oct 2 19:27:47.821337 kubelet[1416]: I1002 19:27:47.821193 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qbnxk" nodeCondition=["DiskPressure"] Oct 2 19:27:47.878288 kubelet[1416]: E1002 19:27:47.878235 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:47.927413 kubelet[1416]: I1002 19:27:47.927353 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-spbtz" nodeCondition=["DiskPressure"] Oct 2 19:27:48.022122 kubelet[1416]: I1002 19:27:48.022068 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9rn9m" nodeCondition=["DiskPressure"] Oct 2 19:27:48.122974 kubelet[1416]: I1002 19:27:48.122923 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bmptk" nodeCondition=["DiskPressure"] Oct 2 19:27:48.173017 kubelet[1416]: I1002 19:27:48.172960 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-289dc" nodeCondition=["DiskPressure"] Oct 2 19:27:48.271508 kubelet[1416]: I1002 19:27:48.271451 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-266k9" nodeCondition=["DiskPressure"] Oct 2 19:27:48.370391 kubelet[1416]: I1002 19:27:48.370339 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vp274" nodeCondition=["DiskPressure"] Oct 2 19:27:48.470766 kubelet[1416]: I1002 19:27:48.470626 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hqlsf" nodeCondition=["DiskPressure"] Oct 2 19:27:48.618379 kubelet[1416]: I1002 19:27:48.618336 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:48.618379 kubelet[1416]: I1002 19:27:48.618377 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 
19:27:48.620194 kubelet[1416]: I1002 19:27:48.620172 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:48.629579 kubelet[1416]: I1002 19:27:48.629555 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:48.629723 kubelet[1416]: I1002 19:27:48.629626 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-8c5qr","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629652 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629664 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629673 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629681 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629689 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:48.629723 kubelet[1416]: E1002 19:27:48.629696 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:48.629723 kubelet[1416]: I1002 19:27:48.629706 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:48.671610 kubelet[1416]: I1002 19:27:48.671561 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-plz9q" nodeCondition=["DiskPressure"] Oct 2 19:27:48.776597 kubelet[1416]: I1002 19:27:48.776120 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qgjsh" nodeCondition=["DiskPressure"] Oct 2 19:27:48.821176 kubelet[1416]: I1002 19:27:48.821118 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nnpnv" nodeCondition=["DiskPressure"] Oct 2 19:27:48.879053 kubelet[1416]: E1002 19:27:48.879007 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:48.915543 kubelet[1416]: E1002 19:27:48.915460 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:27:48.925277 kubelet[1416]: I1002 19:27:48.925213 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qkvzj" nodeCondition=["DiskPressure"] Oct 2 19:27:49.026191 kubelet[1416]: I1002 19:27:49.026147 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-skqjh" nodeCondition=["DiskPressure"] Oct 2 19:27:49.070800 kubelet[1416]: I1002 19:27:49.070495 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-snkfn" nodeCondition=["DiskPressure"] Oct 2 
19:27:49.172814 kubelet[1416]: I1002 19:27:49.172750 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kjw6p" nodeCondition=["DiskPressure"] Oct 2 19:27:49.373426 kubelet[1416]: I1002 19:27:49.373382 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-874zf" nodeCondition=["DiskPressure"] Oct 2 19:27:49.471434 kubelet[1416]: I1002 19:27:49.471376 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-99v24" nodeCondition=["DiskPressure"] Oct 2 19:27:49.601001 kubelet[1416]: I1002 19:27:49.600944 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ld9hr" nodeCondition=["DiskPressure"] Oct 2 19:27:49.677424 kubelet[1416]: I1002 19:27:49.677278 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sf9m9" nodeCondition=["DiskPressure"] Oct 2 19:27:49.778487 kubelet[1416]: I1002 19:27:49.778435 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4ljbf" nodeCondition=["DiskPressure"] Oct 2 19:27:49.870486 kubelet[1416]: I1002 19:27:49.870439 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ds4td" nodeCondition=["DiskPressure"] Oct 2 19:27:49.879129 kubelet[1416]: E1002 19:27:49.879102 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:49.970391 kubelet[1416]: I1002 19:27:49.970059 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nmds7" nodeCondition=["DiskPressure"] Oct 2 19:27:50.171596 kubelet[1416]: I1002 19:27:50.171550 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7g6zv" nodeCondition=["DiskPressure"] Oct 2 19:27:50.271200 kubelet[1416]: I1002 19:27:50.271077 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cjjvx" nodeCondition=["DiskPressure"] Oct 2 19:27:50.372402 kubelet[1416]: I1002 19:27:50.372355 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qhg8n" nodeCondition=["DiskPressure"] Oct 2 19:27:50.471601 kubelet[1416]: I1002 19:27:50.471544 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f4jsz" nodeCondition=["DiskPressure"] Oct 2 19:27:50.520120 kubelet[1416]: I1002 19:27:50.520054 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fjl9c" nodeCondition=["DiskPressure"] Oct 2 19:27:50.629570 kubelet[1416]: I1002 19:27:50.629510 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ljrw7" nodeCondition=["DiskPressure"] Oct 2 19:27:50.725363 kubelet[1416]: I1002 19:27:50.725298 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mxpw6" nodeCondition=["DiskPressure"] Oct 2 19:27:50.821446 kubelet[1416]: I1002 19:27:50.821390 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rdmf7" nodeCondition=["DiskPressure"] Oct 2 19:27:50.879694 kubelet[1416]: E1002 19:27:50.879575 1416 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:51.026347 kubelet[1416]: I1002 19:27:51.026290 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lg82b" nodeCondition=["DiskPressure"] Oct 2 19:27:51.123312 kubelet[1416]: I1002 19:27:51.123238 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sswpv" nodeCondition=["DiskPressure"] Oct 2 19:27:51.222065 kubelet[1416]: I1002 19:27:51.221923 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7wrcx" nodeCondition=["DiskPressure"] Oct 2 19:27:51.322539 kubelet[1416]: I1002 19:27:51.322488 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ntrkz" nodeCondition=["DiskPressure"] Oct 2 19:27:51.423700 kubelet[1416]: I1002 19:27:51.423638 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8mxpx" nodeCondition=["DiskPressure"] Oct 2 19:27:51.522434 kubelet[1416]: I1002 19:27:51.522166 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jv9w7" nodeCondition=["DiskPressure"] Oct 2 19:27:51.621234 kubelet[1416]: I1002 19:27:51.621189 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gh4qp" nodeCondition=["DiskPressure"] Oct 2 19:27:51.721755 kubelet[1416]: I1002 19:27:51.721695 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wjtzk" nodeCondition=["DiskPressure"] Oct 2 19:27:51.772058 kubelet[1416]: I1002 19:27:51.772008 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rk7vs" nodeCondition=["DiskPressure"] Oct 2 19:27:51.872538 kubelet[1416]: I1002 19:27:51.872490 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5gmnb" nodeCondition=["DiskPressure"] Oct 2 19:27:51.879718 kubelet[1416]: E1002 19:27:51.879684 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:52.073862 kubelet[1416]: I1002 19:27:52.073807 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gctrp" nodeCondition=["DiskPressure"] Oct 2 19:27:52.175558 kubelet[1416]: I1002 19:27:52.175419 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qcpk4" nodeCondition=["DiskPressure"] Oct 2 19:27:52.223245 kubelet[1416]: I1002 19:27:52.223182 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g7ls8" nodeCondition=["DiskPressure"] Oct 2 19:27:52.323554 kubelet[1416]: I1002 19:27:52.323501 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xjr2d" nodeCondition=["DiskPressure"] Oct 2 19:27:52.422425 kubelet[1416]: I1002 19:27:52.422377 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-79ddw" nodeCondition=["DiskPressure"] Oct 2 19:27:52.522682 kubelet[1416]: I1002 19:27:52.522515 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x86rq" nodeCondition=["DiskPressure"] Oct 2 
19:27:52.630816 kubelet[1416]: I1002 19:27:52.630746 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kr776" nodeCondition=["DiskPressure"] Oct 2 19:27:52.723249 kubelet[1416]: I1002 19:27:52.723194 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tg577" nodeCondition=["DiskPressure"] Oct 2 19:27:52.822905 kubelet[1416]: I1002 19:27:52.822751 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r7sx7" nodeCondition=["DiskPressure"] Oct 2 19:27:52.880411 kubelet[1416]: E1002 19:27:52.880340 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:52.927236 kubelet[1416]: I1002 19:27:52.927184 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rbwjb" nodeCondition=["DiskPressure"] Oct 2 19:27:53.169105 kubelet[1416]: I1002 19:27:53.169041 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qlbmj" nodeCondition=["DiskPressure"] Oct 2 19:27:53.718506 kubelet[1416]: I1002 19:27:53.718438 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pw5nv" nodeCondition=["DiskPressure"] Oct 2 19:27:53.809113 kubelet[1416]: I1002 19:27:53.809046 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fjjb4" nodeCondition=["DiskPressure"] Oct 2 19:27:53.880815 kubelet[1416]: E1002 19:27:53.880759 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:53.920777 kubelet[1416]: I1002 19:27:53.920725 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xz8xm" nodeCondition=["DiskPressure"] Oct 2 19:27:54.128176 kubelet[1416]: I1002 19:27:54.128107 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vklbr" nodeCondition=["DiskPressure"] Oct 2 19:27:54.161523 kubelet[1416]: I1002 19:27:54.161463 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l6gkc" nodeCondition=["DiskPressure"] Oct 2 19:27:54.187170 kubelet[1416]: I1002 19:27:54.187101 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jrjsm" nodeCondition=["DiskPressure"] Oct 2 19:27:54.214954 kubelet[1416]: I1002 19:27:54.214880 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nlnwq" nodeCondition=["DiskPressure"] Oct 2 19:27:54.279535 kubelet[1416]: I1002 19:27:54.279476 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kpkqn" nodeCondition=["DiskPressure"] Oct 2 19:27:54.373190 kubelet[1416]: I1002 19:27:54.373138 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xnq9w" nodeCondition=["DiskPressure"] Oct 2 19:27:54.474682 kubelet[1416]: I1002 19:27:54.474295 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w8mgf" nodeCondition=["DiskPressure"] Oct 2 19:27:54.526292 kubelet[1416]: I1002 19:27:54.526237 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-qzz9n" nodeCondition=["DiskPressure"] Oct 2 19:27:54.623734 kubelet[1416]: I1002 19:27:54.623677 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sckjn" nodeCondition=["DiskPressure"] Oct 2 19:27:54.722608 kubelet[1416]: I1002 19:27:54.722552 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-krv5h" nodeCondition=["DiskPressure"] Oct 2 19:27:54.819683 kubelet[1416]: E1002 19:27:54.819551 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.831635 kubelet[1416]: I1002 19:27:54.831568 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fx5w5" nodeCondition=["DiskPressure"] Oct 2 19:27:54.881285 kubelet[1416]: E1002 19:27:54.881223 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.917088 kubelet[1416]: E1002 19:27:54.917052 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:27:54.923824 kubelet[1416]: I1002 19:27:54.923766 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-b2dlz" nodeCondition=["DiskPressure"] Oct 2 19:27:54.972496 kubelet[1416]: I1002 19:27:54.972436 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zw6k7" nodeCondition=["DiskPressure"] Oct 2 19:27:55.074619 kubelet[1416]: I1002 19:27:55.074484 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sfg5f" nodeCondition=["DiskPressure"] Oct 2 19:27:55.173574 kubelet[1416]: I1002 19:27:55.173513 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p48tn" nodeCondition=["DiskPressure"] Oct 2 19:27:55.274420 kubelet[1416]: I1002 19:27:55.274356 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xttk8" nodeCondition=["DiskPressure"] Oct 2 19:27:55.373556 kubelet[1416]: I1002 19:27:55.373507 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6wcj6" nodeCondition=["DiskPressure"] Oct 2 19:27:55.475095 kubelet[1416]: I1002 19:27:55.475044 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p5dvg" nodeCondition=["DiskPressure"] Oct 2 19:27:55.572732 kubelet[1416]: I1002 19:27:55.572669 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hjsf5" nodeCondition=["DiskPressure"] Oct 2 19:27:55.674579 kubelet[1416]: I1002 19:27:55.674273 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bc7zc" nodeCondition=["DiskPressure"] Oct 2 19:27:55.773647 kubelet[1416]: I1002 19:27:55.773586 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-t7bqs" nodeCondition=["DiskPressure"] Oct 2 19:27:55.874053 kubelet[1416]: I1002 19:27:55.873999 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wl2kr" nodeCondition=["DiskPressure"] Oct 2 19:27:55.881808 kubelet[1416]: E1002 19:27:55.881766 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:55.975431 kubelet[1416]: I1002 19:27:55.975258 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sr672" nodeCondition=["DiskPressure"] Oct 2 19:27:56.084535 kubelet[1416]: I1002 19:27:56.084447 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-84fjr" nodeCondition=["DiskPressure"] Oct 2 19:27:56.273366 kubelet[1416]: I1002 19:27:56.273218 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bxgjh" nodeCondition=["DiskPressure"] Oct 2 19:27:56.376013 kubelet[1416]: I1002 19:27:56.375944 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4257x" nodeCondition=["DiskPressure"] Oct 2 19:27:56.475945 kubelet[1416]: I1002 19:27:56.475888 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-q5vfj" nodeCondition=["DiskPressure"] Oct 2 19:27:56.573937 kubelet[1416]: I1002 19:27:56.573628 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rtfhf" nodeCondition=["DiskPressure"] Oct 2 19:27:56.673779 kubelet[1416]: I1002 19:27:56.673705 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-njxz4" nodeCondition=["DiskPressure"] Oct 2 19:27:56.773928 kubelet[1416]: I1002 19:27:56.773878 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cxcsk" nodeCondition=["DiskPressure"] Oct 2 19:27:56.822369 kubelet[1416]: I1002 19:27:56.822303 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-flg4n" nodeCondition=["DiskPressure"] Oct 2 19:27:56.882007 kubelet[1416]: E1002 19:27:56.881953 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:56.923298 kubelet[1416]: I1002 19:27:56.923230 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-56qpc" nodeCondition=["DiskPressure"] Oct 2 19:27:57.024472 kubelet[1416]: I1002 19:27:57.024421 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qmqqs" nodeCondition=["DiskPressure"] Oct 2 19:27:57.128537 kubelet[1416]: I1002 19:27:57.128475 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-c8h2q" nodeCondition=["DiskPressure"] Oct 2 19:27:57.223927 kubelet[1416]: I1002 19:27:57.223770 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-44xhx" nodeCondition=["DiskPressure"] Oct 2 19:27:57.322681 kubelet[1416]: I1002 19:27:57.322626 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k7rq4" nodeCondition=["DiskPressure"] Oct 2 19:27:57.529923 
kubelet[1416]: I1002 19:27:57.529470 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z8qvl" nodeCondition=["DiskPressure"] Oct 2 19:27:57.623276 kubelet[1416]: I1002 19:27:57.623224 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-l9kxc" nodeCondition=["DiskPressure"] Oct 2 19:27:57.723397 kubelet[1416]: I1002 19:27:57.723322 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nj4mq" nodeCondition=["DiskPressure"] Oct 2 19:27:57.823191 kubelet[1416]: I1002 19:27:57.823048 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4688q" nodeCondition=["DiskPressure"] Oct 2 19:27:57.873629 kubelet[1416]: I1002 19:27:57.873543 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kpjjl" nodeCondition=["DiskPressure"] Oct 2 19:27:57.882633 kubelet[1416]: E1002 19:27:57.882599 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:57.973082 kubelet[1416]: I1002 19:27:57.973026 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-cl58r" nodeCondition=["DiskPressure"] Oct 2 19:27:58.080321 kubelet[1416]: I1002 19:27:58.080185 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wcpfc" nodeCondition=["DiskPressure"] Oct 2 19:27:58.177695 kubelet[1416]: I1002 19:27:58.177627 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-tfnn5" nodeCondition=["DiskPressure"] Oct 2 19:27:58.273639 kubelet[1416]: I1002 19:27:58.273572 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-c5dph" nodeCondition=["DiskPressure"] Oct 2 19:27:58.330548 kubelet[1416]: I1002 19:27:58.330399 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rbhv8" nodeCondition=["DiskPressure"] Oct 2 19:27:58.423363 kubelet[1416]: I1002 19:27:58.423315 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9zszp" nodeCondition=["DiskPressure"] Oct 2 19:27:58.525218 kubelet[1416]: I1002 19:27:58.525145 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7v8rf" nodeCondition=["DiskPressure"] Oct 2 19:27:58.624487 kubelet[1416]: I1002 19:27:58.624426 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-77tv7" nodeCondition=["DiskPressure"] Oct 2 19:27:58.646054 kubelet[1416]: I1002 19:27:58.646015 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:58.646054 kubelet[1416]: I1002 19:27:58.646059 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:27:58.647480 kubelet[1416]: I1002 19:27:58.647460 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:27:58.657462 kubelet[1416]: I1002 19:27:58.657436 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:27:58.657542 kubelet[1416]: I1002 19:27:58.657524 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:27:58.657571 kubelet[1416]: E1002 19:27:58.657552 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:27:58.657571 kubelet[1416]: E1002 19:27:58.657565 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:27:58.657630 kubelet[1416]: E1002 19:27:58.657575 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 19:27:58.657630 kubelet[1416]: E1002 19:27:58.657588 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:27:58.657630 kubelet[1416]: E1002 19:27:58.657598 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:27:58.657630 kubelet[1416]: E1002 19:27:58.657607 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:27:58.657630 kubelet[1416]: I1002 19:27:58.657617 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:27:58.824448 kubelet[1416]: I1002 19:27:58.824388 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jfnm6" nodeCondition=["DiskPressure"] Oct 2 19:27:58.883015 kubelet[1416]: E1002 19:27:58.882870 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:58.924558 kubelet[1416]: I1002 19:27:58.924517 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gzwgf" nodeCondition=["DiskPressure"] Oct 2 19:27:59.031857 kubelet[1416]: I1002 19:27:59.031779 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zx6dz" nodeCondition=["DiskPressure"] Oct 2 19:27:59.123313 kubelet[1416]: I1002 19:27:59.123255 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mbxht" nodeCondition=["DiskPressure"] Oct 2 19:27:59.224935 kubelet[1416]: I1002 19:27:59.224775 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xbm74" nodeCondition=["DiskPressure"] Oct 2 19:27:59.328485 kubelet[1416]: I1002 19:27:59.328418 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xmx4p" nodeCondition=["DiskPressure"] Oct 2 19:27:59.426308 kubelet[1416]: I1002 19:27:59.426255 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r5mzf" nodeCondition=["DiskPressure"] Oct 2 19:27:59.524873 kubelet[1416]: I1002 19:27:59.524726 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lrrz9" nodeCondition=["DiskPressure"] Oct 2 19:27:59.634715 kubelet[1416]: I1002 19:27:59.634657 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ckmk2" nodeCondition=["DiskPressure"] Oct 2 19:27:59.723948 kubelet[1416]: I1002 
19:27:59.723892 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-llf9m" nodeCondition=["DiskPressure"] Oct 2 19:27:59.825666 kubelet[1416]: I1002 19:27:59.825529 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8kxd9" nodeCondition=["DiskPressure"] Oct 2 19:27:59.883952 kubelet[1416]: E1002 19:27:59.883867 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:59.925226 kubelet[1416]: I1002 19:27:59.925185 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nzf2w" nodeCondition=["DiskPressure"] Oct 2 19:27:59.972565 kubelet[1416]: I1002 19:27:59.972498 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jldx6" nodeCondition=["DiskPressure"] Oct 2 19:28:00.075153 kubelet[1416]: I1002 19:28:00.075084 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mvt5j" nodeCondition=["DiskPressure"] Oct 2 19:28:00.274101 kubelet[1416]: I1002 19:28:00.274030 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fnbfc" nodeCondition=["DiskPressure"] Oct 2 19:28:00.375116 kubelet[1416]: I1002 19:28:00.375060 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2zhzc" nodeCondition=["DiskPressure"] Oct 2 19:28:00.473978 kubelet[1416]: I1002 19:28:00.473912 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nkg74" nodeCondition=["DiskPressure"] Oct 2 19:28:00.575504 kubelet[1416]: I1002 19:28:00.575366 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5ln49" nodeCondition=["DiskPressure"] Oct 2 19:28:00.674434 kubelet[1416]: I1002 19:28:00.674370 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2bhcz" nodeCondition=["DiskPressure"] Oct 2 19:28:00.773724 kubelet[1416]: I1002 19:28:00.773648 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-bk8d8" nodeCondition=["DiskPressure"] Oct 2 19:28:00.877156 kubelet[1416]: I1002 19:28:00.877074 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6drfb" nodeCondition=["DiskPressure"] Oct 2 19:28:00.884348 kubelet[1416]: E1002 19:28:00.884267 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:01.000846 kubelet[1416]: I1002 19:28:01.000736 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dc5ft" nodeCondition=["DiskPressure"] Oct 2 19:28:01.147052 kubelet[1416]: I1002 19:28:01.146473 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rhnhd" nodeCondition=["DiskPressure"] Oct 2 19:28:01.303241 kubelet[1416]: I1002 19:28:01.301922 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-6bxc4" nodeCondition=["DiskPressure"] Oct 2 19:28:01.453217 kubelet[1416]: I1002 19:28:01.452430 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-bscgw" nodeCondition=["DiskPressure"] Oct 2 19:28:01.637603 kubelet[1416]: I1002 19:28:01.618621 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hndpp" nodeCondition=["DiskPressure"] Oct 2 19:28:01.782239 kubelet[1416]: I1002 19:28:01.782044 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gn876" nodeCondition=["DiskPressure"] Oct 2 19:28:01.886587 kubelet[1416]: E1002 19:28:01.886497 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:01.903937 kubelet[1416]: I1002 19:28:01.902587 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ddtgs" nodeCondition=["DiskPressure"] Oct 2 19:28:02.074959 kubelet[1416]: I1002 19:28:02.065893 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-2tptj" nodeCondition=["DiskPressure"] Oct 2 19:28:02.211186 kubelet[1416]: I1002 19:28:02.210520 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gj7vc" nodeCondition=["DiskPressure"] Oct 2 19:28:02.391532 kubelet[1416]: I1002 19:28:02.391414 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-9kq55" nodeCondition=["DiskPressure"] Oct 2 19:28:02.533684 kubelet[1416]: I1002 19:28:02.533606 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7c7vd" nodeCondition=["DiskPressure"] Oct 2 19:28:02.855721 kubelet[1416]: I1002 19:28:02.852261 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-z5vtc" nodeCondition=["DiskPressure"] Oct 2 19:28:02.889882 kubelet[1416]: E1002 19:28:02.889699 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:03.001431 kubelet[1416]: I1002 19:28:03.000733 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-sdpgk" nodeCondition=["DiskPressure"] Oct 2 19:28:03.141834 kubelet[1416]: I1002 19:28:03.137562 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5vc9g" nodeCondition=["DiskPressure"] Oct 2 19:28:03.255548 kubelet[1416]: I1002 19:28:03.254244 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dzws2" nodeCondition=["DiskPressure"] Oct 2 19:28:03.394677 kubelet[1416]: I1002 19:28:03.390736 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4z8fn" nodeCondition=["DiskPressure"] Oct 2 19:28:03.618326 kubelet[1416]: I1002 19:28:03.618220 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-v2l6p" nodeCondition=["DiskPressure"] Oct 2 19:28:03.729057 kubelet[1416]: I1002 19:28:03.728642 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g8vx6" nodeCondition=["DiskPressure"] Oct 2 19:28:03.811520 kubelet[1416]: I1002 19:28:03.810591 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jzph7" nodeCondition=["DiskPressure"] Oct 2 19:28:03.900591 
kubelet[1416]: E1002 19:28:03.897319 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.013872 kubelet[1416]: I1002 19:28:04.010112 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-ks2tn" nodeCondition=["DiskPressure"] Oct 2 19:28:04.132236 kubelet[1416]: I1002 19:28:04.132120 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-d9hsk" nodeCondition=["DiskPressure"] Oct 2 19:28:04.397474 kubelet[1416]: I1002 19:28:04.392923 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-55qn8" nodeCondition=["DiskPressure"] Oct 2 19:28:04.624638 kubelet[1416]: I1002 19:28:04.624546 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dbzbx" nodeCondition=["DiskPressure"] Oct 2 19:28:04.757269 kubelet[1416]: I1002 19:28:04.755583 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lx8ql" nodeCondition=["DiskPressure"] Oct 2 19:28:04.861079 kubelet[1416]: I1002 19:28:04.860997 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-zvvkz" nodeCondition=["DiskPressure"] Oct 2 19:28:04.910801 kubelet[1416]: E1002 19:28:04.910715 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:05.058597 kubelet[1416]: I1002 19:28:05.039372 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-f9x6l" nodeCondition=["DiskPressure"] Oct 2 19:28:05.228328 kubelet[1416]: I1002 19:28:05.228186 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-r9cnn" nodeCondition=["DiskPressure"] Oct 2 19:28:05.336409 kubelet[1416]: I1002 19:28:05.335672 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4bqf2" nodeCondition=["DiskPressure"] Oct 2 19:28:05.531613 systemd[1]: run-containerd-runc-k8s.io-a1ab22a1c7a565aebb100b2558947802494f129a6d5c29943d03aefc4b2f83d3-runc.88pwI6.mount: Deactivated successfully. 
Oct 2 19:28:05.569843 kubelet[1416]: I1002 19:28:05.566719 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w7m45" nodeCondition=["DiskPressure"] Oct 2 19:28:05.700151 kubelet[1416]: I1002 19:28:05.698162 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-phsgw" nodeCondition=["DiskPressure"] Oct 2 19:28:05.859685 kubelet[1416]: I1002 19:28:05.847808 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p52wg" nodeCondition=["DiskPressure"] Oct 2 19:28:05.913352 kubelet[1416]: E1002 19:28:05.913052 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:06.011264 kubelet[1416]: I1002 19:28:05.997140 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-5jdxn" nodeCondition=["DiskPressure"] Oct 2 19:28:06.185999 kubelet[1416]: I1002 19:28:06.165551 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8g94x" nodeCondition=["DiskPressure"] Oct 2 19:28:06.271663 kubelet[1416]: I1002 19:28:06.270908 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-97jkr" nodeCondition=["DiskPressure"] Oct 2 19:28:06.465602 kubelet[1416]: I1002 19:28:06.441934 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nl82g" nodeCondition=["DiskPressure"] Oct 2 19:28:06.606665 kubelet[1416]: I1002 19:28:06.591455 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-dbwml" nodeCondition=["DiskPressure"] Oct 2 19:28:06.754286 kubelet[1416]: I1002 19:28:06.740928 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-pj2rz" nodeCondition=["DiskPressure"] Oct 2 19:28:06.893780 kubelet[1416]: I1002 19:28:06.892271 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-kx55b" nodeCondition=["DiskPressure"] Oct 2 19:28:06.918616 kubelet[1416]: E1002 19:28:06.915098 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:06.918616 kubelet[1416]: E1002 19:28:06.916000 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:28:07.063362 kubelet[1416]: I1002 19:28:07.049047 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-j4qpf" nodeCondition=["DiskPressure"] Oct 2 19:28:07.088854 systemd[1]: run-containerd-runc-k8s.io-9db7039b87de3b7e93dfd9bda75d9cd6a70bccf86a9f9d5607882e16d9f6eac4-runc.oShNMC.mount: Deactivated successfully. 
Oct 2 19:28:07.201667 kubelet[1416]: I1002 19:28:07.201234 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-mcvfq" nodeCondition=["DiskPressure"] Oct 2 19:28:07.340496 kubelet[1416]: I1002 19:28:07.338495 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-hvcjl" nodeCondition=["DiskPressure"] Oct 2 19:28:07.505048 kubelet[1416]: I1002 19:28:07.504185 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-g9qsz" nodeCondition=["DiskPressure"] Oct 2 19:28:07.633943 kubelet[1416]: I1002 19:28:07.633836 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7j98h" nodeCondition=["DiskPressure"] Oct 2 19:28:07.765232 kubelet[1416]: I1002 19:28:07.763925 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nbdbp" nodeCondition=["DiskPressure"] Oct 2 19:28:07.867866 kubelet[1416]: I1002 19:28:07.867502 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-js8kg" nodeCondition=["DiskPressure"] Oct 2 19:28:07.915822 kubelet[1416]: E1002 19:28:07.915670 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:08.128006 kubelet[1416]: I1002 19:28:08.125366 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vtpd8" nodeCondition=["DiskPressure"] Oct 2 19:28:08.272682 kubelet[1416]: I1002 19:28:08.269203 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-w64gj" nodeCondition=["DiskPressure"] Oct 2 19:28:08.554993 kubelet[1416]: I1002 19:28:08.553809 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-8r7fc" nodeCondition=["DiskPressure"] Oct 2 19:28:08.698518 kubelet[1416]: I1002 19:28:08.698452 1416 eviction_manager.go:342] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Oct 2 19:28:08.698518 kubelet[1416]: I1002 19:28:08.698506 1416 container_gc.go:86] "Attempting to delete unused containers" Oct 2 19:28:08.701015 kubelet[1416]: I1002 19:28:08.700852 1416 image_gc_manager.go:340] "Attempting to delete unused images" Oct 2 19:28:08.744589 kubelet[1416]: I1002 19:28:08.744124 1416 eviction_manager.go:353] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Oct 2 19:28:08.744589 kubelet[1416]: I1002 19:28:08.744272 1416 eviction_manager.go:371] "Eviction manager: pods ranked for eviction" pods=["calico-system/csi-node-driver-75kzt","kube-system/coredns-5dd5756b68-kq6xj","calico-system/calico-kube-controllers-74b9887bb6-g8t2d","kube-system/coredns-5dd5756b68-8c5qr","calico-system/calico-node-6pn5j","kube-system/kube-proxy-x6vv7"] Oct 2 19:28:08.744589 kubelet[1416]: E1002 19:28:08.744321 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-75kzt" Oct 2 19:28:08.744589 kubelet[1416]: E1002 19:28:08.744342 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-kq6xj" Oct 2 19:28:08.744589 kubelet[1416]: E1002 19:28:08.744371 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-74b9887bb6-g8t2d" Oct 2 
19:28:08.744589 kubelet[1416]: E1002 19:28:08.744388 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-5dd5756b68-8c5qr" Oct 2 19:28:08.744589 kubelet[1416]: E1002 19:28:08.744503 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-6pn5j" Oct 2 19:28:08.744589 kubelet[1416]: E1002 19:28:08.744520 1416 eviction_manager.go:574] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-x6vv7" Oct 2 19:28:08.744589 kubelet[1416]: I1002 19:28:08.744548 1416 eviction_manager.go:403] "Eviction manager: unable to evict any pods from the node" Oct 2 19:28:08.753031 kubelet[1416]: I1002 19:28:08.751725 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-k9znd" nodeCondition=["DiskPressure"] Oct 2 19:28:08.926756 kubelet[1416]: I1002 19:28:08.901680 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-rc64j" nodeCondition=["DiskPressure"] Oct 2 19:28:08.926756 kubelet[1416]: E1002 19:28:08.915961 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:08.926756 kubelet[1416]: E1002 19:28:08.923215 1416 pod_workers.go:1300] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.25.0\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.25.0\\\"\"]" pod="calico-system/csi-node-driver-75kzt" podUID="b0822001-b43f-4855-b401-678c43b136af" Oct 2 19:28:09.093630 kubelet[1416]: I1002 19:28:09.093543 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-jsz5f" nodeCondition=["DiskPressure"] Oct 2 19:28:09.225402 kubelet[1416]: I1002 19:28:09.223956 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-fgp77" nodeCondition=["DiskPressure"] Oct 2 19:28:09.400235 kubelet[1416]: I1002 19:28:09.399136 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-4fbn7" nodeCondition=["DiskPressure"] Oct 2 19:28:09.590775 kubelet[1416]: I1002 19:28:09.579852 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-gcwx5" nodeCondition=["DiskPressure"] Oct 2 19:28:09.700603 kubelet[1416]: I1002 19:28:09.697627 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-p2njk" nodeCondition=["DiskPressure"] Oct 2 19:28:09.849129 kubelet[1416]: I1002 19:28:09.848977 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-lmsmb" nodeCondition=["DiskPressure"] Oct 2 19:28:09.922500 kubelet[1416]: E1002 19:28:09.921962 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:09.973700 kubelet[1416]: I1002 19:28:09.972887 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-njb7h" nodeCondition=["DiskPressure"] Oct 2 19:28:10.066627 kubelet[1416]: I1002 19:28:10.066083 1416 eviction_manager.go:170] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-8547bd6cc6-pq9hs" nodeCondition=["DiskPressure"] Oct 2 19:28:10.216766 kubelet[1416]: I1002 19:28:10.214958 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vj2mk" nodeCondition=["DiskPressure"] Oct 2 19:28:10.387901 kubelet[1416]: I1002 19:28:10.373425 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-7zck6" nodeCondition=["DiskPressure"] Oct 2 19:28:10.513697 kubelet[1416]: I1002 19:28:10.510130 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-c7hwh" nodeCondition=["DiskPressure"] Oct 2 19:28:10.696952 kubelet[1416]: I1002 19:28:10.695026 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-qdnlx" nodeCondition=["DiskPressure"] Oct 2 19:28:10.814730 kubelet[1416]: I1002 19:28:10.800960 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-wlkqc" nodeCondition=["DiskPressure"] Oct 2 19:28:10.939642 kubelet[1416]: E1002 19:28:10.925310 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:10.939642 kubelet[1416]: I1002 19:28:10.938562 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-xkszg" nodeCondition=["DiskPressure"] Oct 2 19:28:11.119683 kubelet[1416]: I1002 19:28:11.119616 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-vmfh8" nodeCondition=["DiskPressure"] Oct 2 19:28:11.242335 kubelet[1416]: I1002 19:28:11.237597 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-x6xmh" nodeCondition=["DiskPressure"] Oct 2 19:28:11.351083 kubelet[1416]: I1002 19:28:11.349942 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-nn4rk" nodeCondition=["DiskPressure"] Oct 2 19:28:11.463824 kubelet[1416]: I1002 19:28:11.459990 1416 eviction_manager.go:170] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-8547bd6cc6-67hmw" nodeCondition=["DiskPressure"]