Oct 31 01:19:59.019356 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Oct 30 23:32:41 -00 2025 Oct 31 01:19:59.019405 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 01:19:59.019425 kernel: BIOS-provided physical RAM map: Oct 31 01:19:59.019439 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 31 01:19:59.019448 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 31 01:19:59.019462 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 31 01:19:59.019476 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 31 01:19:59.019486 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 31 01:19:59.019500 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 31 01:19:59.019516 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 31 01:19:59.019526 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 31 01:19:59.019540 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 31 01:19:59.019553 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 31 01:19:59.019563 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 31 01:19:59.019579 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 31 01:19:59.019598 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 31 01:19:59.019611 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 31 
01:19:59.019621 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 01:19:59.019637 kernel: NX (Execute Disable) protection: active Oct 31 01:19:59.019653 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Oct 31 01:19:59.019665 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Oct 31 01:19:59.019679 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Oct 31 01:19:59.019691 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Oct 31 01:19:59.019703 kernel: extended physical RAM map: Oct 31 01:19:59.019717 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 31 01:19:59.019734 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 31 01:19:59.019746 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 31 01:19:59.019760 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 31 01:19:59.019773 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 31 01:19:59.019785 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 31 01:19:59.019798 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 31 01:19:59.019811 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Oct 31 01:19:59.019823 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Oct 31 01:19:59.019836 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Oct 31 01:19:59.019849 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Oct 31 01:19:59.019859 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Oct 31 01:19:59.019877 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 31 01:19:59.019892 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 31 01:19:59.019904 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 31 01:19:59.019915 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 31 01:19:59.019935 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 31 01:19:59.019951 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 31 01:19:59.019965 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 01:19:59.019985 kernel: efi: EFI v2.70 by EDK II Oct 31 01:19:59.019998 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Oct 31 01:19:59.020013 kernel: random: crng init done Oct 31 01:19:59.020028 kernel: SMBIOS 2.8 present. Oct 31 01:19:59.020039 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 31 01:19:59.020054 kernel: Hypervisor detected: KVM Oct 31 01:19:59.020067 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 31 01:19:59.020087 kernel: kvm-clock: cpu 0, msr e1a0001, primary cpu clock Oct 31 01:19:59.020116 kernel: kvm-clock: using sched offset of 5742465325 cycles Oct 31 01:19:59.020150 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 31 01:19:59.020168 kernel: tsc: Detected 2794.748 MHz processor Oct 31 01:19:59.020185 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 01:19:59.020198 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 01:19:59.020212 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 31 01:19:59.020225 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 01:19:59.020240 kernel: Using GB pages for direct mapping Oct 31 01:19:59.020258 kernel: Secure boot disabled Oct 31 01:19:59.020274 kernel: ACPI: Early table checksum verification disabled Oct 31 01:19:59.020293 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 31 01:19:59.020318 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 31 01:19:59.020334 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020345 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020354 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 31 01:19:59.020393 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020403 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020420 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020437 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:19:59.020457 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 31 01:19:59.020471 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 31 01:19:59.020487 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 31 01:19:59.020502 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 31 01:19:59.020518 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 31 01:19:59.020534 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 31 01:19:59.020549 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 31 01:19:59.020565 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 31 01:19:59.020580 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 31 01:19:59.020600 kernel: No NUMA configuration found Oct 31 01:19:59.020616 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 31 01:19:59.020633 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 31 
01:19:59.020647 kernel: Zone ranges: Oct 31 01:19:59.020664 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 01:19:59.020679 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 31 01:19:59.020692 kernel: Normal empty Oct 31 01:19:59.020709 kernel: Movable zone start for each node Oct 31 01:19:59.020723 kernel: Early memory node ranges Oct 31 01:19:59.020743 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 31 01:19:59.020760 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 31 01:19:59.020776 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 31 01:19:59.020791 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 31 01:19:59.020807 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 31 01:19:59.020822 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 31 01:19:59.020838 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 31 01:19:59.020854 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 01:19:59.020868 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 31 01:19:59.020885 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 31 01:19:59.020905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 01:19:59.020919 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 31 01:19:59.020936 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 31 01:19:59.020951 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 31 01:19:59.020965 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 31 01:19:59.020982 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 31 01:19:59.020998 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 31 01:19:59.021012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 31 01:19:59.021028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 31 01:19:59.021048 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 31 01:19:59.021061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 31 01:19:59.021078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 31 01:19:59.021095 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 01:19:59.021109 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 31 01:19:59.021138 kernel: TSC deadline timer available Oct 31 01:19:59.021150 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 31 01:19:59.021166 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 31 01:19:59.021182 kernel: kvm-guest: setup PV sched yield Oct 31 01:19:59.021202 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 31 01:19:59.021219 kernel: Booting paravirtualized kernel on KVM Oct 31 01:19:59.021244 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 01:19:59.021264 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 31 01:19:59.021282 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Oct 31 01:19:59.021296 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Oct 31 01:19:59.021313 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 31 01:19:59.021330 kernel: kvm-guest: setup async PF for cpu 0 Oct 31 01:19:59.021345 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Oct 31 01:19:59.021377 kernel: kvm-guest: PV spinlocks enabled Oct 31 01:19:59.021394 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 31 01:19:59.021410 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Oct 31 01:19:59.021430 kernel: Policy zone: DMA32 Oct 31 01:19:59.021448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 01:19:59.021466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 31 01:19:59.021483 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 01:19:59.021503 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 01:19:59.021520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 01:19:59.021538 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 169308K reserved, 0K cma-reserved) Oct 31 01:19:59.021556 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 01:19:59.021570 kernel: ftrace: allocating 34614 entries in 136 pages Oct 31 01:19:59.021587 kernel: ftrace: allocated 136 pages with 2 groups Oct 31 01:19:59.021605 kernel: rcu: Hierarchical RCU implementation. Oct 31 01:19:59.021621 kernel: rcu: RCU event tracing is enabled. Oct 31 01:19:59.021638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 01:19:59.021660 kernel: Rude variant of Tasks RCU enabled. Oct 31 01:19:59.021677 kernel: Tracing variant of Tasks RCU enabled. Oct 31 01:19:59.021692 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 31 01:19:59.021709 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 01:19:59.021727 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 31 01:19:59.021741 kernel: Console: colour dummy device 80x25 Oct 31 01:19:59.021758 kernel: printk: console [ttyS0] enabled Oct 31 01:19:59.021775 kernel: ACPI: Core revision 20210730 Oct 31 01:19:59.021790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 31 01:19:59.021810 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 01:19:59.021827 kernel: x2apic enabled Oct 31 01:19:59.021842 kernel: Switched APIC routing to physical x2apic. Oct 31 01:19:59.021858 kernel: kvm-guest: setup PV IPIs Oct 31 01:19:59.021875 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 01:19:59.021890 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 31 01:19:59.021907 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Oct 31 01:19:59.021924 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 31 01:19:59.021939 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 31 01:19:59.021958 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 31 01:19:59.021976 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 01:19:59.021991 kernel: Spectre V2 : Mitigation: Retpolines Oct 31 01:19:59.022009 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 31 01:19:59.022025 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 31 01:19:59.022040 kernel: active return thunk: retbleed_return_thunk Oct 31 01:19:59.022060 kernel: RETBleed: Mitigation: untrained return thunk Oct 31 01:19:59.022079 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 01:19:59.022095 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp Oct 31 01:19:59.022116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 01:19:59.022175 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 01:19:59.022186 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 01:19:59.022194 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 01:19:59.022201 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 31 01:19:59.022208 kernel: Freeing SMP alternatives memory: 32K Oct 31 01:19:59.022215 kernel: pid_max: default: 32768 minimum: 301 Oct 31 01:19:59.022222 kernel: LSM: Security Framework initializing Oct 31 01:19:59.022229 kernel: SELinux: Initializing. Oct 31 01:19:59.022239 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 01:19:59.022246 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 01:19:59.022253 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 31 01:19:59.022260 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 31 01:19:59.022267 kernel: ... version: 0 Oct 31 01:19:59.022274 kernel: ... bit width: 48 Oct 31 01:19:59.022281 kernel: ... generic registers: 6 Oct 31 01:19:59.022288 kernel: ... value mask: 0000ffffffffffff Oct 31 01:19:59.022295 kernel: ... max period: 00007fffffffffff Oct 31 01:19:59.022303 kernel: ... fixed-purpose events: 0 Oct 31 01:19:59.022310 kernel: ... event mask: 000000000000003f Oct 31 01:19:59.022317 kernel: signal: max sigframe size: 1776 Oct 31 01:19:59.022324 kernel: rcu: Hierarchical SRCU implementation. Oct 31 01:19:59.022331 kernel: smp: Bringing up secondary CPUs ... Oct 31 01:19:59.022338 kernel: x86: Booting SMP configuration: Oct 31 01:19:59.022345 kernel: .... 
node #0, CPUs: #1 Oct 31 01:19:59.022352 kernel: kvm-clock: cpu 1, msr e1a0041, secondary cpu clock Oct 31 01:19:59.022371 kernel: kvm-guest: setup async PF for cpu 1 Oct 31 01:19:59.022379 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Oct 31 01:19:59.022386 kernel: #2 Oct 31 01:19:59.022393 kernel: kvm-clock: cpu 2, msr e1a0081, secondary cpu clock Oct 31 01:19:59.022401 kernel: kvm-guest: setup async PF for cpu 2 Oct 31 01:19:59.022408 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Oct 31 01:19:59.022414 kernel: #3 Oct 31 01:19:59.022421 kernel: kvm-clock: cpu 3, msr e1a00c1, secondary cpu clock Oct 31 01:19:59.022428 kernel: kvm-guest: setup async PF for cpu 3 Oct 31 01:19:59.022435 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Oct 31 01:19:59.022442 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 01:19:59.022450 kernel: smpboot: Max logical packages: 1 Oct 31 01:19:59.022457 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 31 01:19:59.022464 kernel: devtmpfs: initialized Oct 31 01:19:59.022471 kernel: x86/mm: Memory block size: 128MB Oct 31 01:19:59.022478 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 31 01:19:59.022485 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 31 01:19:59.022492 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 31 01:19:59.022499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 31 01:19:59.022506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 31 01:19:59.022514 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 01:19:59.022521 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 01:19:59.022528 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 01:19:59.022535 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Oct 31 01:19:59.022542 kernel: audit: initializing netlink subsys (disabled) Oct 31 01:19:59.022549 kernel: audit: type=2000 audit(1761873598.242:1): state=initialized audit_enabled=0 res=1 Oct 31 01:19:59.022556 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 01:19:59.022563 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 01:19:59.022571 kernel: cpuidle: using governor menu Oct 31 01:19:59.022578 kernel: ACPI: bus type PCI registered Oct 31 01:19:59.022585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 01:19:59.022592 kernel: dca service started, version 1.12.1 Oct 31 01:19:59.022599 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 31 01:19:59.022606 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Oct 31 01:19:59.022613 kernel: PCI: Using configuration type 1 for base access Oct 31 01:19:59.022620 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 01:19:59.022627 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 01:19:59.022636 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 01:19:59.022643 kernel: ACPI: Added _OSI(Module Device) Oct 31 01:19:59.022650 kernel: ACPI: Added _OSI(Processor Device) Oct 31 01:19:59.022657 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 01:19:59.022664 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 31 01:19:59.022671 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 31 01:19:59.022678 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 31 01:19:59.022685 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 01:19:59.022692 kernel: ACPI: Interpreter enabled Oct 31 01:19:59.022698 kernel: ACPI: PM: (supports S0 S3 S5) Oct 31 01:19:59.022707 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 01:19:59.022714 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 01:19:59.022721 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 31 01:19:59.022728 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 01:19:59.022840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 01:19:59.022911 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 31 01:19:59.022978 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 31 01:19:59.022989 kernel: PCI host bridge to bus 0000:00 Oct 31 01:19:59.023064 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 01:19:59.023135 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 31 01:19:59.023198 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 01:19:59.023258 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 31 01:19:59.023318 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Oct 31 01:19:59.023390 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 31 01:19:59.023454 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 01:19:59.023531 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 31 01:19:59.023608 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 31 01:19:59.023676 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 31 01:19:59.023742 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 31 01:19:59.023807 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 31 01:19:59.023875 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 31 01:19:59.023941 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 01:19:59.024014 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 31 01:19:59.024085 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 31 01:19:59.024167 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 31 01:19:59.024241 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 31 01:19:59.024320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 31 01:19:59.024405 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 31 01:19:59.024473 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 31 01:19:59.024539 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 31 01:19:59.024613 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 31 01:19:59.024680 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 31 01:19:59.024748 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 31 01:19:59.024814 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 31 01:19:59.024884 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 31 01:19:59.024957 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 31 01:19:59.025024 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 31 01:19:59.025101 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 31 01:19:59.025187 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 31 01:19:59.025255 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 31 01:19:59.025327 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 31 01:19:59.025411 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 31 01:19:59.025421 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 31 01:19:59.025429 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 31 01:19:59.025436 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 01:19:59.025443 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 31 01:19:59.025451 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 31 01:19:59.025458 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 31 01:19:59.025466 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 31 01:19:59.025475 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 31 01:19:59.025482 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 31 01:19:59.025490 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 31 01:19:59.025497 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 31 01:19:59.025504 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 31 01:19:59.025511 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 31 01:19:59.025518 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 31 01:19:59.025525 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 31 01:19:59.025531 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 31 01:19:59.025540 kernel: iommu: Default domain type: Translated Oct 31 
01:19:59.025547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 01:19:59.025615 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 31 01:19:59.025682 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 01:19:59.025748 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 31 01:19:59.025757 kernel: vgaarb: loaded Oct 31 01:19:59.025764 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 31 01:19:59.025771 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 31 01:19:59.025780 kernel: PTP clock support registered Oct 31 01:19:59.025787 kernel: Registered efivars operations Oct 31 01:19:59.025795 kernel: PCI: Using ACPI for IRQ routing Oct 31 01:19:59.025802 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 01:19:59.025809 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 31 01:19:59.025816 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 31 01:19:59.025823 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Oct 31 01:19:59.025829 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Oct 31 01:19:59.025836 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 31 01:19:59.025844 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 31 01:19:59.025852 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 31 01:19:59.025859 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 31 01:19:59.025866 kernel: clocksource: Switched to clocksource kvm-clock Oct 31 01:19:59.025873 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 01:19:59.025881 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 01:19:59.025888 kernel: pnp: PnP ACPI init Oct 31 01:19:59.025961 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 31 01:19:59.025973 kernel: pnp: PnP ACPI: found 6 devices Oct 31 01:19:59.025980 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 01:19:59.025987 kernel: NET: Registered PF_INET protocol family Oct 31 01:19:59.025995 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 01:19:59.026002 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 01:19:59.026009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 01:19:59.026016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 01:19:59.026023 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 31 01:19:59.026030 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 01:19:59.026039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 01:19:59.026046 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 01:19:59.026053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 01:19:59.026060 kernel: NET: Registered PF_XDP protocol family Oct 31 01:19:59.026141 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 31 01:19:59.026211 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 31 01:19:59.026273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 31 01:19:59.026333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 31 01:19:59.026408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 31 01:19:59.026470 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 31 01:19:59.026529 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 31 01:19:59.026590 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 31 01:19:59.026599 kernel: PCI: CLS 0 bytes, default 64 Oct 31 01:19:59.026606 kernel: Initialise system trusted keyrings Oct 31 01:19:59.026614 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 01:19:59.026621 kernel: Key type asymmetric registered Oct 31 01:19:59.026628 kernel: Asymmetric key parser 'x509' registered Oct 31 01:19:59.026637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 31 01:19:59.026644 kernel: io scheduler mq-deadline registered Oct 31 01:19:59.026660 kernel: io scheduler kyber registered Oct 31 01:19:59.026674 kernel: io scheduler bfq registered Oct 31 01:19:59.026681 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 01:19:59.026689 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 31 01:19:59.026697 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 01:19:59.026704 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 01:19:59.026712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 01:19:59.026720 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 01:19:59.026728 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 01:19:59.026735 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 01:19:59.026743 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 01:19:59.026751 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 01:19:59.026822 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 31 01:19:59.026886 kernel: rtc_cmos 00:04: registered as rtc0 Oct 31 01:19:59.026948 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T01:19:58 UTC (1761873598) Oct 31 01:19:59.027013 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 31 01:19:59.027023 kernel: efifb: probing for efifb Oct 31 01:19:59.027030 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 31 01:19:59.027038 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 31 01:19:59.027045 kernel: efifb: scrolling: redraw Oct 31 01:19:59.027052 kernel: efifb: Truecolor: 
size=8:8:8:8, shift=24:16:8:0 Oct 31 01:19:59.027060 kernel: Console: switching to colour frame buffer device 160x50 Oct 31 01:19:59.027067 kernel: fb0: EFI VGA frame buffer device Oct 31 01:19:59.027075 kernel: pstore: Registered efi as persistent store backend Oct 31 01:19:59.027084 kernel: NET: Registered PF_INET6 protocol family Oct 31 01:19:59.027091 kernel: Segment Routing with IPv6 Oct 31 01:19:59.027099 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 01:19:59.027108 kernel: NET: Registered PF_PACKET protocol family Oct 31 01:19:59.027115 kernel: Key type dns_resolver registered Oct 31 01:19:59.027133 kernel: IPI shorthand broadcast: enabled Oct 31 01:19:59.027141 kernel: sched_clock: Marking stable (749697293, 229176610)->(1031784815, -52910912) Oct 31 01:19:59.027148 kernel: registered taskstats version 1 Oct 31 01:19:59.027156 kernel: Loading compiled-in X.509 certificates Oct 31 01:19:59.027164 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 8306d4e745b00e76b5fae2596c709096b7f28adc' Oct 31 01:19:59.027172 kernel: Key type .fscrypt registered Oct 31 01:19:59.027179 kernel: Key type fscrypt-provisioning registered Oct 31 01:19:59.027187 kernel: pstore: Using crash dump compression: deflate Oct 31 01:19:59.027194 kernel: ima: No TPM chip found, activating TPM-bypass! 
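The rtc_cmos entry above seeds the system clock from the hardware clock: epoch second 1761873598 is what the kernel renders as 2025-10-31T01:19:58 UTC. A quick sanity check of that conversion (an illustrative snippet, not part of the boot process):

```python
from datetime import datetime, timezone

# Epoch value printed by rtc_cmos when setting the system clock
rtc_epoch = 1761873598

# Convert to an aware UTC datetime, formatted the way the log prints it
utc = datetime.fromtimestamp(rtc_epoch, tz=timezone.utc)
print(utc.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2025-10-31T01:19:58 UTC
```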
Oct 31 01:19:59.027203 kernel: ima: Allocated hash algorithm: sha1 Oct 31 01:19:59.027210 kernel: ima: No architecture policies found Oct 31 01:19:59.027218 kernel: clk: Disabling unused clocks Oct 31 01:19:59.027226 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 31 01:19:59.027233 kernel: Write protecting the kernel read-only data: 28672k Oct 31 01:19:59.027240 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 31 01:19:59.027248 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 31 01:19:59.027255 kernel: Run /init as init process Oct 31 01:19:59.027263 kernel: with arguments: Oct 31 01:19:59.027271 kernel: /init Oct 31 01:19:59.027278 kernel: with environment: Oct 31 01:19:59.027286 kernel: HOME=/ Oct 31 01:19:59.027293 kernel: TERM=linux Oct 31 01:19:59.027300 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 31 01:19:59.027310 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 31 01:19:59.027319 systemd[1]: Detected virtualization kvm. Oct 31 01:19:59.027328 systemd[1]: Detected architecture x86-64. Oct 31 01:19:59.027336 systemd[1]: Running in initrd. Oct 31 01:19:59.027344 systemd[1]: No hostname configured, using default hostname. Oct 31 01:19:59.027352 systemd[1]: Hostname set to . Oct 31 01:19:59.027369 systemd[1]: Initializing machine ID from VM UUID. Oct 31 01:19:59.027377 systemd[1]: Queued start job for default target initrd.target. Oct 31 01:19:59.027385 systemd[1]: Started systemd-ask-password-console.path. Oct 31 01:19:59.027393 systemd[1]: Reached target cryptsetup.target. Oct 31 01:19:59.027400 systemd[1]: Reached target paths.target. Oct 31 01:19:59.027408 systemd[1]: Reached target slices.target. 
Oct 31 01:19:59.027417 systemd[1]: Reached target swap.target. Oct 31 01:19:59.027424 systemd[1]: Reached target timers.target. Oct 31 01:19:59.027433 systemd[1]: Listening on iscsid.socket. Oct 31 01:19:59.027441 systemd[1]: Listening on iscsiuio.socket. Oct 31 01:19:59.027449 systemd[1]: Listening on systemd-journald-audit.socket. Oct 31 01:19:59.027457 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 31 01:19:59.027465 systemd[1]: Listening on systemd-journald.socket. Oct 31 01:19:59.027474 systemd[1]: Listening on systemd-networkd.socket. Oct 31 01:19:59.027482 systemd[1]: Listening on systemd-udevd-control.socket. Oct 31 01:19:59.027490 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 31 01:19:59.027497 systemd[1]: Reached target sockets.target. Oct 31 01:19:59.027505 systemd[1]: Starting kmod-static-nodes.service... Oct 31 01:19:59.027513 systemd[1]: Finished network-cleanup.service. Oct 31 01:19:59.027521 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 01:19:59.027529 systemd[1]: Starting systemd-journald.service... Oct 31 01:19:59.027536 systemd[1]: Starting systemd-modules-load.service... Oct 31 01:19:59.027545 systemd[1]: Starting systemd-resolved.service... Oct 31 01:19:59.027553 systemd[1]: Starting systemd-vconsole-setup.service... Oct 31 01:19:59.027561 systemd[1]: Finished kmod-static-nodes.service. Oct 31 01:19:59.027569 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 01:19:59.027577 kernel: audit: type=1130 audit(1761873599.020:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.027588 systemd-journald[197]: Journal started Oct 31 01:19:59.027626 systemd-journald[197]: Runtime Journal (/run/log/journal/4a16e851a83e4026b0c86c2010eb4ff2) is 6.0M, max 48.4M, 42.4M free. 
Oct 31 01:19:59.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.013351 systemd-modules-load[198]: Inserted module 'overlay' Oct 31 01:19:59.039681 systemd[1]: Started systemd-journald.service. Oct 31 01:19:59.039750 kernel: audit: type=1130 audit(1761873599.032:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.038406 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 01:19:59.044715 systemd[1]: Finished systemd-vconsole-setup.service. Oct 31 01:19:59.058519 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 01:19:59.058552 kernel: audit: type=1130 audit(1761873599.046:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.050740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 31 01:19:59.066784 kernel: audit: type=1130 audit(1761873599.058:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:19:59.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.061040 systemd-resolved[199]: Positive Trust Anchors: Oct 31 01:19:59.061049 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:19:59.061077 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 01:19:59.064605 systemd[1]: Starting dracut-cmdline-ask.service... Oct 31 01:19:59.096230 kernel: audit: type=1130 audit(1761873599.088:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.086374 systemd-resolved[199]: Defaulting to hostname 'linux'. Oct 31 01:19:59.087550 systemd[1]: Started systemd-resolved.service. Oct 31 01:19:59.089084 systemd[1]: Reached target nss-lookup.target. Oct 31 01:19:59.132648 systemd-modules-load[198]: Inserted module 'br_netfilter' Oct 31 01:19:59.134247 kernel: Bridge firewalling registered Oct 31 01:19:59.140645 systemd[1]: Finished dracut-cmdline-ask.service. 
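The positive trust anchor systemd-resolved logs above is the DNSSEC DS record for the root zone: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the digest itself. Splitting the record into its fields (illustration only):

```python
# DS record exactly as systemd-resolved logs it for the root zone (".")
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

name, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
assert (rtype, key_tag, algorithm, digest_type) == ("DS", "20326", "8", "2")
assert len(digest) == 64  # a SHA-256 digest is 64 hex characters
```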
Oct 31 01:19:59.148720 kernel: audit: type=1130 audit(1761873599.141:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.142716 systemd[1]: Starting dracut-cmdline.service... Oct 31 01:19:59.152379 kernel: SCSI subsystem initialized Oct 31 01:19:59.152895 dracut-cmdline[214]: dracut-dracut-053 Oct 31 01:19:59.155720 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 01:19:59.167675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 01:19:59.167702 kernel: device-mapper: uevent: version 1.0.3 Oct 31 01:19:59.169792 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 31 01:19:59.172435 systemd-modules-load[198]: Inserted module 'dm_multipath' Oct 31 01:19:59.173102 systemd[1]: Finished systemd-modules-load.service. Oct 31 01:19:59.182925 kernel: audit: type=1130 audit(1761873599.173:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:19:59.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.174566 systemd[1]: Starting systemd-sysctl.service... Oct 31 01:19:59.185882 systemd[1]: Finished systemd-sysctl.service. Oct 31 01:19:59.193026 kernel: audit: type=1130 audit(1761873599.187:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.231391 kernel: Loading iSCSI transport class v2.0-870. Oct 31 01:19:59.247386 kernel: iscsi: registered transport (tcp) Oct 31 01:19:59.268674 kernel: iscsi: registered transport (qla4xxx) Oct 31 01:19:59.268744 kernel: QLogic iSCSI HBA Driver Oct 31 01:19:59.290156 systemd[1]: Finished dracut-cmdline.service. Oct 31 01:19:59.297569 kernel: audit: type=1130 audit(1761873599.289:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.297577 systemd[1]: Starting dracut-pre-udev.service... 
Oct 31 01:19:59.345387 kernel: raid6: avx2x4 gen() 29840 MB/s Oct 31 01:19:59.363379 kernel: raid6: avx2x4 xor() 7067 MB/s Oct 31 01:19:59.381385 kernel: raid6: avx2x2 gen() 31327 MB/s Oct 31 01:19:59.399379 kernel: raid6: avx2x2 xor() 18999 MB/s Oct 31 01:19:59.417384 kernel: raid6: avx2x1 gen() 26187 MB/s Oct 31 01:19:59.450379 kernel: raid6: avx2x1 xor() 15214 MB/s Oct 31 01:19:59.468389 kernel: raid6: sse2x4 gen() 14569 MB/s Oct 31 01:19:59.486380 kernel: raid6: sse2x4 xor() 6350 MB/s Oct 31 01:19:59.504381 kernel: raid6: sse2x2 gen() 15681 MB/s Oct 31 01:19:59.522399 kernel: raid6: sse2x2 xor() 8280 MB/s Oct 31 01:19:59.540387 kernel: raid6: sse2x1 gen() 11812 MB/s Oct 31 01:19:59.558861 kernel: raid6: sse2x1 xor() 7440 MB/s Oct 31 01:19:59.558901 kernel: raid6: using algorithm avx2x2 gen() 31327 MB/s Oct 31 01:19:59.558924 kernel: raid6: .... xor() 18999 MB/s, rmw enabled Oct 31 01:19:59.560122 kernel: raid6: using avx2x2 recovery algorithm Oct 31 01:19:59.573381 kernel: xor: automatically using best checksumming function avx Oct 31 01:19:59.673386 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 31 01:19:59.680941 systemd[1]: Finished dracut-pre-udev.service. Oct 31 01:19:59.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.683000 audit: BPF prog-id=7 op=LOAD Oct 31 01:19:59.683000 audit: BPF prog-id=8 op=LOAD Oct 31 01:19:59.684092 systemd[1]: Starting systemd-udevd.service... Oct 31 01:19:59.697815 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 31 01:19:59.703168 systemd[1]: Started systemd-udevd.service. Oct 31 01:19:59.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:19:59.704185 systemd[1]: Starting dracut-pre-trigger.service... Oct 31 01:19:59.715401 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Oct 31 01:19:59.740687 systemd[1]: Finished dracut-pre-trigger.service. Oct 31 01:19:59.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.742811 systemd[1]: Starting systemd-udev-trigger.service... Oct 31 01:19:59.774996 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 01:19:59.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:19:59.808386 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 31 01:19:59.833412 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 01:19:59.833442 kernel: GPT:9289727 != 19775487 Oct 31 01:19:59.833456 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 01:19:59.833468 kernel: GPT:9289727 != 19775487 Oct 31 01:19:59.833486 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 01:19:59.833501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:19:59.833514 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 01:19:59.833526 kernel: libata version 3.00 loaded. Oct 31 01:19:59.833539 kernel: AVX2 version of gcm_enc/dec engaged. 
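The raid6 self-test earlier benchmarks every gen()/xor() implementation and keeps the fastest generator, here avx2x2 at 31327 MB/s. The selection amounts to a max over the measured throughputs (a simplified sketch, not the kernel's actual code):

```python
# gen() throughputs (MB/s) as printed by the raid6 self-test above
gen_mbps = {
    "avx2x4": 29840,
    "avx2x2": 31327,
    "avx2x1": 26187,
    "sse2x4": 14569,
    "sse2x2": 15681,
    "sse2x1": 11812,
}

# Keep the algorithm with the highest gen() throughput,
# matching "raid6: using algorithm avx2x2 gen() 31327 MB/s"
best = max(gen_mbps, key=gen_mbps.get)
print(best, gen_mbps[best])  # avx2x2 31327
```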
Oct 31 01:19:59.834964 kernel: AES CTR mode by8 optimization enabled Oct 31 01:19:59.834991 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 01:19:59.864523 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 01:19:59.864551 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 01:19:59.864895 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 01:19:59.865131 kernel: scsi host0: ahci Oct 31 01:19:59.865255 kernel: scsi host1: ahci Oct 31 01:19:59.865378 kernel: scsi host2: ahci Oct 31 01:19:59.865517 kernel: scsi host3: ahci Oct 31 01:19:59.865637 kernel: scsi host4: ahci Oct 31 01:19:59.865767 kernel: scsi host5: ahci Oct 31 01:19:59.865900 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 31 01:19:59.865915 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 31 01:19:59.865927 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 31 01:19:59.865958 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 31 01:19:59.865970 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 31 01:19:59.865981 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 31 01:19:59.878380 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (441) Oct 31 01:19:59.883506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 01:19:59.890588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 31 01:19:59.894639 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 01:19:59.894713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 01:19:59.903250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 01:19:59.906377 systemd[1]: Starting disk-uuid.service... Oct 31 01:19:59.915063 disk-uuid[539]: Primary Header is updated. 
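The GPT warnings above mean the alternate (backup) header claims to sit at LBA 9289727 while the last LBA of the 19775488-block vda disk is 19775487 — the usual sign of a disk image that was grown after the partition table was written, which is why the kernel suggests GNU Parted. The arithmetic behind the check (illustrative):

```python
# Values taken from the virtio_blk and GPT messages above
total_blocks = 19775488   # 512-byte logical blocks on vda
alt_header_lba = 9289727  # where the alternate GPT header actually is

# A well-formed GPT keeps its alternate header in the disk's last LBA
expected_lba = total_blocks - 1
print(expected_lba)                     # 19775487
assert alt_header_lba != expected_lba   # hence "GPT:9289727 != 19775487"
```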
Oct 31 01:19:59.915063 disk-uuid[539]: Secondary Entries is updated. Oct 31 01:19:59.915063 disk-uuid[539]: Secondary Header is updated. Oct 31 01:19:59.921389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:19:59.925389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:19:59.929389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:20:00.182129 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 01:20:00.182209 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 31 01:20:00.182221 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 01:20:00.182231 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 01:20:00.182386 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 01:20:00.184395 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 31 01:20:00.186266 kernel: ata3.00: applying bridge limits Oct 31 01:20:00.188382 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 01:20:00.188406 kernel: ata3.00: configured for UDMA/100 Oct 31 01:20:00.192396 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 31 01:20:00.219396 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 31 01:20:00.238071 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 01:20:00.238106 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 01:20:01.035872 disk-uuid[541]: The operation has completed successfully. Oct 31 01:20:01.038140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:20:01.054742 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 01:20:01.054819 systemd[1]: Finished disk-uuid.service. Oct 31 01:20:01.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:01.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.064594 systemd[1]: Starting verity-setup.service... Oct 31 01:20:01.076400 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 31 01:20:01.094888 systemd[1]: Found device dev-mapper-usr.device. Oct 31 01:20:01.098697 systemd[1]: Mounting sysusr-usr.mount... Oct 31 01:20:01.101441 systemd[1]: Finished verity-setup.service. Oct 31 01:20:01.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.156958 systemd[1]: Mounted sysusr-usr.mount. Oct 31 01:20:01.159251 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 01:20:01.159327 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 31 01:20:01.162027 systemd[1]: Starting ignition-setup.service... Oct 31 01:20:01.164849 systemd[1]: Starting parse-ip-for-networkd.service... Oct 31 01:20:01.172589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:20:01.172617 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:20:01.172627 kernel: BTRFS info (device vda6): has skinny extents Oct 31 01:20:01.181125 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 01:20:01.220553 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 01:20:01.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:01.232000 audit: BPF prog-id=9 op=LOAD Oct 31 01:20:01.232793 systemd[1]: Starting systemd-networkd.service... Oct 31 01:20:01.253250 systemd-networkd[713]: lo: Link UP Oct 31 01:20:01.253259 systemd-networkd[713]: lo: Gained carrier Oct 31 01:20:01.253812 systemd-networkd[713]: Enumeration completed Oct 31 01:20:01.254161 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 01:20:01.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.256246 systemd-networkd[713]: eth0: Link UP Oct 31 01:20:01.256249 systemd-networkd[713]: eth0: Gained carrier Oct 31 01:20:01.256261 systemd[1]: Started systemd-networkd.service. Oct 31 01:20:01.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.259080 systemd[1]: Reached target network.target. Oct 31 01:20:01.263712 systemd[1]: Starting iscsiuio.service... Oct 31 01:20:01.266422 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 01:20:01.277466 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:20:01.277466 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 31 01:20:01.277466 iscsid[718]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Oct 31 01:20:01.277466 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 31 01:20:01.277466 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 31 01:20:01.277466 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:20:01.277466 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 31 01:20:01.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.268626 systemd[1]: Started iscsiuio.service. Oct 31 01:20:01.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.270776 systemd[1]: Starting iscsid.service... Oct 31 01:20:01.274576 systemd[1]: Started iscsid.service. Oct 31 01:20:01.278156 systemd[1]: Starting dracut-initqueue.service... Oct 31 01:20:01.287688 systemd[1]: Finished dracut-initqueue.service. Oct 31 01:20:01.291921 systemd[1]: Reached target remote-fs-pre.target. Oct 31 01:20:01.294851 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 01:20:01.294925 systemd[1]: Reached target remote-fs.target. Oct 31 01:20:01.295681 systemd[1]: Starting dracut-pre-mount.service... Oct 31 01:20:01.306460 systemd[1]: Finished dracut-pre-mount.service. Oct 31 01:20:01.650852 systemd[1]: Finished ignition-setup.service. Oct 31 01:20:01.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 01:20:01.651723 systemd[1]: Starting ignition-fetch-offline.service... Oct 31 01:20:01.688790 ignition[733]: Ignition 2.14.0 Oct 31 01:20:01.688800 ignition[733]: Stage: fetch-offline Oct 31 01:20:01.688949 ignition[733]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:20:01.688958 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:20:01.689058 ignition[733]: parsed url from cmdline: "" Oct 31 01:20:01.689061 ignition[733]: no config URL provided Oct 31 01:20:01.689065 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:20:01.689072 ignition[733]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:20:01.689094 ignition[733]: op(1): [started] loading QEMU firmware config module Oct 31 01:20:01.689098 ignition[733]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 01:20:01.691957 ignition[733]: op(1): [finished] loading QEMU firmware config module Oct 31 01:20:01.785820 ignition[733]: parsing config with SHA512: 57a15646ab77523afe9b5f2c9d5e210a199b41ff01e2cb4a608d7be5377a037d049806cd3fd4f72898750a56824d44fe281e36ce48d8f872f1d3b61c90fe26e2 Oct 31 01:20:01.792945 unknown[733]: fetched base config from "system" Oct 31 01:20:01.792954 unknown[733]: fetched user config from "qemu" Oct 31 01:20:01.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.793435 ignition[733]: fetch-offline: fetch-offline passed Oct 31 01:20:01.795184 systemd[1]: Finished ignition-fetch-offline.service. Oct 31 01:20:01.793485 ignition[733]: Ignition finished successfully Oct 31 01:20:01.796684 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
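Ignition logs the SHA-512 of the rendered config before applying it — the 128-hex-digit value above. The same kind of fingerprint can be reproduced with the standard library (the payload here is hypothetical, standing in for the config Ignition actually fetched):

```python
import hashlib

# Hypothetical config payload; Ignition hashes the real fetched config bytes
config = b'{"ignition": {"version": "2.14.0"}}'

digest = hashlib.sha512(config).hexdigest()
assert len(digest) == 128  # SHA-512 renders as 128 hex characters
print(digest[:16], "...")
```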
Oct 31 01:20:01.805524 ignition[741]: Ignition 2.14.0 Oct 31 01:20:01.797354 systemd[1]: Starting ignition-kargs.service... Oct 31 01:20:01.805529 ignition[741]: Stage: kargs Oct 31 01:20:01.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.807652 systemd[1]: Finished ignition-kargs.service. Oct 31 01:20:01.805606 ignition[741]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:20:01.810775 systemd[1]: Starting ignition-disks.service... Oct 31 01:20:01.805615 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:20:01.806417 ignition[741]: kargs: kargs passed Oct 31 01:20:01.806449 ignition[741]: Ignition finished successfully Oct 31 01:20:01.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.818692 systemd[1]: Finished ignition-disks.service. Oct 31 01:20:01.816841 ignition[748]: Ignition 2.14.0 Oct 31 01:20:01.820898 systemd[1]: Reached target initrd-root-device.target. Oct 31 01:20:01.816846 ignition[748]: Stage: disks Oct 31 01:20:01.823539 systemd[1]: Reached target local-fs-pre.target. Oct 31 01:20:01.816923 ignition[748]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:20:01.824880 systemd[1]: Reached target local-fs.target. Oct 31 01:20:01.816930 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:20:01.827397 systemd[1]: Reached target sysinit.target. Oct 31 01:20:01.817834 ignition[748]: disks: disks passed Oct 31 01:20:01.844120 systemd-fsck[756]: ROOT: clean, 637/553520 files, 56032/553472 blocks Oct 31 01:20:01.828651 systemd[1]: Reached target basic.target. 
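The systemd-fsck summary above reports ROOT clean with 637 of 553520 inodes and 56032 of 553472 blocks in use, i.e. roughly a tenth of the filesystem occupied. Turning those counters into a utilization figure (illustrative):

```python
# Counters from the "ROOT: clean, 637/553520 files, 56032/553472 blocks" line
files_used, files_total = 637, 553520
blocks_used, blocks_total = 56032, 553472

block_pct = 100 * blocks_used / blocks_total
print(f"{block_pct:.1f}% of blocks in use")  # 10.1% of blocks in use
```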
Oct 31 01:20:01.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.817866 ignition[748]: Ignition finished successfully Oct 31 01:20:01.829379 systemd[1]: Starting systemd-fsck-root.service... Oct 31 01:20:01.846984 systemd[1]: Finished systemd-fsck-root.service. Oct 31 01:20:01.851074 systemd[1]: Mounting sysroot.mount... Oct 31 01:20:01.860378 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 31 01:20:01.860755 systemd[1]: Mounted sysroot.mount. Oct 31 01:20:01.861960 systemd[1]: Reached target initrd-root-fs.target. Oct 31 01:20:01.865035 systemd[1]: Mounting sysroot-usr.mount... Oct 31 01:20:01.867428 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 31 01:20:01.867455 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 01:20:01.867471 systemd[1]: Reached target ignition-diskful.target. Oct 31 01:20:01.870458 systemd[1]: Mounted sysroot-usr.mount. Oct 31 01:20:01.881307 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 01:20:01.873848 systemd[1]: Starting initrd-setup-root.service... Oct 31 01:20:01.885334 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Oct 31 01:20:01.887565 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 01:20:01.889638 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 01:20:01.907646 systemd[1]: Finished initrd-setup-root.service. Oct 31 01:20:01.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:01.908348 systemd[1]: Starting ignition-mount.service... Oct 31 01:20:01.909454 systemd[1]: Starting sysroot-boot.service... Oct 31 01:20:01.917335 bash[807]: umount: /sysroot/usr/share/oem: not mounted. Oct 31 01:20:01.921962 ignition[809]: INFO : Ignition 2.14.0 Oct 31 01:20:01.921962 ignition[809]: INFO : Stage: mount Oct 31 01:20:01.926574 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:20:01.926574 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:20:01.926574 ignition[809]: INFO : mount: mount passed Oct 31 01:20:01.926574 ignition[809]: INFO : Ignition finished successfully Oct 31 01:20:01.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:01.924665 systemd[1]: Finished ignition-mount.service. Oct 31 01:20:01.926722 systemd[1]: Finished sysroot-boot.service. Oct 31 01:20:02.106782 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 31 01:20:02.115831 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Oct 31 01:20:02.115854 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:20:02.115864 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:20:02.117245 kernel: BTRFS info (device vda6): has skinny extents Oct 31 01:20:02.121336 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 31 01:20:02.123218 systemd[1]: Starting ignition-files.service... 
Oct 31 01:20:02.136181 ignition[838]: INFO : Ignition 2.14.0
Oct 31 01:20:02.136181 ignition[838]: INFO : Stage: files
Oct 31 01:20:02.139015 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 01:20:02.139015 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 01:20:02.139015 ignition[838]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 01:20:02.139015 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 01:20:02.139015 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 01:20:02.149536 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 31 01:20:02.139824 unknown[838]: wrote ssh authorized keys file for user: core
Oct 31 01:20:02.191220 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 31 01:20:02.255428 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 01:20:02.255428 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:20:02.261902 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 31 01:20:02.596775 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 31 01:20:02.913566 systemd-networkd[713]: eth0: Gained IPv6LL
Oct 31 01:20:03.043910 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:20:03.043910 ignition[838]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:20:03.051330 ignition[838]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:20:03.126935 kernel: kauditd_printk_skb: 24 callbacks suppressed
Oct 31 01:20:03.126960 kernel: audit: type=1130 audit(1761873603.070:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.126972 kernel: audit: type=1130 audit(1761873603.088:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.126984 kernel: audit: type=1130 audit(1761873603.098:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.126994 kernel: audit: type=1131 audit(1761873603.098:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.127015 kernel: audit: type=1130 audit(1761873603.126:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.127163 ignition[838]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:20:03.127163 ignition[838]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:20:03.127163 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 01:20:03.127163 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 01:20:03.127163 ignition[838]: INFO : files: files passed
Oct 31 01:20:03.127163 ignition[838]: INFO : Ignition finished successfully
Oct 31 01:20:03.162617 kernel: audit: type=1131 audit(1761873603.126:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.162636 kernel: audit: type=1130 audit(1761873603.152:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.067116 systemd[1]: Finished ignition-files.service.
Oct 31 01:20:03.071438 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 31 01:20:03.167403 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 31 01:20:03.231254 kernel: audit: type=1131 audit(1761873603.167:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.231270 kernel: audit: type=1131 audit(1761873603.177:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.231280 kernel: audit: type=1131 audit(1761873603.182:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.081541 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 31 01:20:03.236441 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 01:20:03.082116 systemd[1]: Starting ignition-quench.service...
Oct 31 01:20:03.084996 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 31 01:20:03.088816 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 01:20:03.241000 audit: BPF prog-id=6 op=UNLOAD
Oct 31 01:20:03.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.088875 systemd[1]: Finished ignition-quench.service.
Oct 31 01:20:03.098529 systemd[1]: Reached target ignition-complete.target.
Oct 31 01:20:03.113097 systemd[1]: Starting initrd-parse-etc.service...
Oct 31 01:20:03.254947 ignition[878]: INFO : Ignition 2.14.0
Oct 31 01:20:03.254947 ignition[878]: INFO : Stage: umount
Oct 31 01:20:03.254947 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 01:20:03.254947 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 01:20:03.254947 ignition[878]: INFO : umount: umount passed
Oct 31 01:20:03.254947 ignition[878]: INFO : Ignition finished successfully
Oct 31 01:20:03.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.123617 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 01:20:03.123685 systemd[1]: Finished initrd-parse-etc.service.
Oct 31 01:20:03.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.127038 systemd[1]: Reached target initrd-fs.target.
Oct 31 01:20:03.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.138302 systemd[1]: Reached target initrd.target.
Oct 31 01:20:03.142191 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 31 01:20:03.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.142770 systemd[1]: Starting dracut-pre-pivot.service...
Oct 31 01:20:03.151215 systemd[1]: Finished dracut-pre-pivot.service.
Oct 31 01:20:03.153528 systemd[1]: Starting initrd-cleanup.service...
Oct 31 01:20:03.165332 systemd[1]: Stopped target nss-lookup.target.
Oct 31 01:20:03.167474 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 31 01:20:03.167630 systemd[1]: Stopped target timers.target.
Oct 31 01:20:03.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.167884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 01:20:03.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:03.167967 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 31 01:20:03.168263 systemd[1]: Stopped target initrd.target.
Oct 31 01:20:03.173890 systemd[1]: Stopped target basic.target.
Oct 31 01:20:03.309000 audit: BPF prog-id=5 op=UNLOAD
Oct 31 01:20:03.309000 audit: BPF prog-id=4 op=UNLOAD
Oct 31 01:20:03.309000 audit: BPF prog-id=3 op=UNLOAD
Oct 31 01:20:03.174181 systemd[1]: Stopped target ignition-complete.target.
Oct 31 01:20:03.174766 systemd[1]: Stopped target ignition-diskful.target.
Oct 31 01:20:03.311000 audit: BPF prog-id=8 op=UNLOAD
Oct 31 01:20:03.311000 audit: BPF prog-id=7 op=UNLOAD
Oct 31 01:20:03.175075 systemd[1]: Stopped target initrd-root-device.target.
Oct 31 01:20:03.175367 systemd[1]: Stopped target remote-fs.target.
Oct 31 01:20:03.175627 systemd[1]: Stopped target remote-fs-pre.target.
Oct 31 01:20:03.175917 systemd[1]: Stopped target sysinit.target.
Oct 31 01:20:03.176212 systemd[1]: Stopped target local-fs.target.
Oct 31 01:20:03.176777 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 01:20:03.177074 systemd[1]: Stopped target swap.target.
Oct 31 01:20:03.177339 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 01:20:03.177433 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 01:20:03.177702 systemd[1]: Stopped target cryptsetup.target.
Oct 31 01:20:03.183074 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 01:20:03.183157 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 01:20:03.333677 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Oct 31 01:20:03.333702 iscsid[718]: iscsid shutting down.
Oct 31 01:20:03.183425 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 01:20:03.183530 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 31 01:20:03.188779 systemd[1]: Stopped target paths.target.
Oct 31 01:20:03.188955 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 01:20:03.193398 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 31 01:20:03.193781 systemd[1]: Stopped target slices.target.
Oct 31 01:20:03.194037 systemd[1]: Stopped target sockets.target.
Oct 31 01:20:03.194314 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 01:20:03.194395 systemd[1]: Closed iscsid.socket.
Oct 31 01:20:03.194614 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 01:20:03.194670 systemd[1]: Closed iscsiuio.socket.
Oct 31 01:20:03.194898 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 01:20:03.194976 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 01:20:03.195206 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 01:20:03.195294 systemd[1]: Stopped ignition-files.service.
Oct 31 01:20:03.196377 systemd[1]: Stopping ignition-mount.service...
Oct 31 01:20:03.196550 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 01:20:03.196652 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 01:20:03.197456 systemd[1]: Stopping sysroot-boot.service...
Oct 31 01:20:03.197955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 01:20:03.198092 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 01:20:03.198442 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 01:20:03.198558 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 01:20:03.202924 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 01:20:03.206560 systemd[1]: Finished initrd-cleanup.service.
Oct 31 01:20:03.208457 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 01:20:03.208545 systemd[1]: Stopped ignition-mount.service.
Oct 31 01:20:03.208653 systemd[1]: Stopped target network.target.
Oct 31 01:20:03.208859 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 01:20:03.208897 systemd[1]: Stopped ignition-disks.service.
Oct 31 01:20:03.209173 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 01:20:03.209207 systemd[1]: Stopped ignition-kargs.service.
Oct 31 01:20:03.209780 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 01:20:03.209815 systemd[1]: Stopped ignition-setup.service.
Oct 31 01:20:03.210154 systemd[1]: Stopping systemd-networkd.service...
Oct 31 01:20:03.210805 systemd[1]: Stopping systemd-resolved.service...
Oct 31 01:20:03.211910 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 01:20:03.231298 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 01:20:03.231390 systemd[1]: Stopped systemd-resolved.service.
Oct 31 01:20:03.240476 systemd-networkd[713]: eth0: DHCPv6 lease lost
Oct 31 01:20:03.341000 audit: BPF prog-id=9 op=UNLOAD
Oct 31 01:20:03.242510 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 01:20:03.242596 systemd[1]: Stopped systemd-networkd.service.
Oct 31 01:20:03.246182 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 01:20:03.246207 systemd[1]: Closed systemd-networkd.socket.
Oct 31 01:20:03.250039 systemd[1]: Stopping network-cleanup.service...
Oct 31 01:20:03.251800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 01:20:03.251863 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 31 01:20:03.254978 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 01:20:03.255031 systemd[1]: Stopped systemd-sysctl.service.
Oct 31 01:20:03.256449 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 01:20:03.256489 systemd[1]: Stopped systemd-modules-load.service.
Oct 31 01:20:03.259127 systemd[1]: Stopping systemd-udevd.service...
Oct 31 01:20:03.262223 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 31 01:20:03.265568 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 01:20:03.265639 systemd[1]: Stopped network-cleanup.service.
Oct 31 01:20:03.268149 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 01:20:03.268238 systemd[1]: Stopped systemd-udevd.service.
Oct 31 01:20:03.271304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 01:20:03.271337 systemd[1]: Closed systemd-udevd-control.socket.
Oct 31 01:20:03.274286 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 01:20:03.274311 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 31 01:20:03.275689 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 01:20:03.275721 systemd[1]: Stopped dracut-pre-udev.service.
Oct 31 01:20:03.278435 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 01:20:03.278476 systemd[1]: Stopped dracut-cmdline.service.
Oct 31 01:20:03.279834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 01:20:03.279876 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 31 01:20:03.283120 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 31 01:20:03.285185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 01:20:03.285224 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 31 01:20:03.287868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 01:20:03.287941 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 31 01:20:03.296989 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 01:20:03.297077 systemd[1]: Stopped sysroot-boot.service.
Oct 31 01:20:03.298947 systemd[1]: Reached target initrd-switch-root.target.
Oct 31 01:20:03.301593 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 01:20:03.301634 systemd[1]: Stopped initrd-setup-root.service.
Oct 31 01:20:03.303614 systemd[1]: Starting initrd-switch-root.service...
Oct 31 01:20:03.308544 systemd[1]: Switching root.
Oct 31 01:20:03.343018 systemd-journald[197]: Journal stopped
Oct 31 01:20:05.916147 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 31 01:20:05.916203 kernel: SELinux: Class anon_inode not defined in policy.
Oct 31 01:20:05.916222 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 31 01:20:05.916236 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 01:20:05.916254 kernel: SELinux: policy capability open_perms=1
Oct 31 01:20:05.916267 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 01:20:05.916284 kernel: SELinux: policy capability always_check_network=0
Oct 31 01:20:05.916297 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 01:20:05.916310 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 01:20:05.916326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 01:20:05.916338 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 01:20:05.916353 systemd[1]: Successfully loaded SELinux policy in 47.858ms.
Oct 31 01:20:05.916393 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.398ms.
Oct 31 01:20:05.916414 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 01:20:05.916428 systemd[1]: Detected virtualization kvm.
Oct 31 01:20:05.916442 systemd[1]: Detected architecture x86-64.
Oct 31 01:20:05.916456 systemd[1]: Detected first boot.
Oct 31 01:20:05.916470 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 01:20:05.916484 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Oct 31 01:20:05.916498 systemd[1]: Populated /etc with preset unit settings.
Oct 31 01:20:05.916514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 01:20:05.916530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 01:20:05.916546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 01:20:05.916566 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 01:20:05.916580 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 31 01:20:05.916596 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 31 01:20:05.916610 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 31 01:20:05.916624 systemd[1]: Created slice system-getty.slice.
Oct 31 01:20:05.916639 systemd[1]: Created slice system-modprobe.slice.
Oct 31 01:20:05.916653 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 31 01:20:05.916668 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 31 01:20:05.916682 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 31 01:20:05.916697 systemd[1]: Created slice user.slice.
Oct 31 01:20:05.916711 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 01:20:05.916729 systemd[1]: Started systemd-ask-password-wall.path.
Oct 31 01:20:05.916743 systemd[1]: Set up automount boot.automount.
Oct 31 01:20:05.916757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 31 01:20:05.916774 systemd[1]: Reached target integritysetup.target.
Oct 31 01:20:05.916788 systemd[1]: Reached target remote-cryptsetup.target.
Oct 31 01:20:05.916802 systemd[1]: Reached target remote-fs.target.
Oct 31 01:20:05.916816 systemd[1]: Reached target slices.target.
Oct 31 01:20:05.916830 systemd[1]: Reached target swap.target.
Oct 31 01:20:05.916845 systemd[1]: Reached target torcx.target.
Oct 31 01:20:05.916859 systemd[1]: Reached target veritysetup.target.
Oct 31 01:20:05.916873 systemd[1]: Listening on systemd-coredump.socket.
Oct 31 01:20:05.916887 systemd[1]: Listening on systemd-initctl.socket.
Oct 31 01:20:05.916901 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 01:20:05.916914 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 01:20:05.916939 systemd[1]: Listening on systemd-journald.socket.
Oct 31 01:20:05.916953 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 01:20:05.916967 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 01:20:05.916980 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 01:20:05.916997 systemd[1]: Listening on systemd-userdbd.socket.
Oct 31 01:20:05.917013 systemd[1]: Mounting dev-hugepages.mount...
Oct 31 01:20:05.917027 systemd[1]: Mounting dev-mqueue.mount...
Oct 31 01:20:05.917041 systemd[1]: Mounting media.mount...
Oct 31 01:20:05.917055 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:20:05.917070 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 31 01:20:05.917084 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 31 01:20:05.917099 systemd[1]: Mounting tmp.mount...
Oct 31 01:20:05.917113 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 31 01:20:05.917129 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 01:20:05.917142 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 01:20:05.917156 systemd[1]: Starting modprobe@configfs.service...
Oct 31 01:20:05.917170 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 01:20:05.917183 systemd[1]: Starting modprobe@drm.service...
Oct 31 01:20:05.917197 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 01:20:05.917210 systemd[1]: Starting modprobe@fuse.service...
Oct 31 01:20:05.917224 systemd[1]: Starting modprobe@loop.service...
Oct 31 01:20:05.917238 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 01:20:05.917254 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 31 01:20:05.917267 kernel: loop: module loaded
Oct 31 01:20:05.917281 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Oct 31 01:20:05.917294 kernel: fuse: init (API version 7.34)
Oct 31 01:20:05.917308 systemd[1]: Starting systemd-journald.service...
Oct 31 01:20:05.917322 systemd[1]: Starting systemd-modules-load.service...
Oct 31 01:20:05.917336 systemd[1]: Starting systemd-network-generator.service...
Oct 31 01:20:05.917349 systemd[1]: Starting systemd-remount-fs.service...
Oct 31 01:20:05.917412 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 01:20:05.917434 systemd-journald[1039]: Journal started
Oct 31 01:20:05.917484 systemd-journald[1039]: Runtime Journal (/run/log/journal/4a16e851a83e4026b0c86c2010eb4ff2) is 6.0M, max 48.4M, 42.4M free.
Oct 31 01:20:05.787000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 31 01:20:05.787000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Oct 31 01:20:05.914000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 31 01:20:05.914000 audit[1039]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffeaf52c3b0 a2=4000 a3=7ffeaf52c44c items=0 ppid=1 pid=1039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:20:05.914000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 31 01:20:05.923414 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:20:05.926939 systemd[1]: Started systemd-journald.service.
Oct 31 01:20:05.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.927974 systemd[1]: Mounted dev-hugepages.mount.
Oct 31 01:20:05.929497 systemd[1]: Mounted dev-mqueue.mount.
Oct 31 01:20:05.930926 systemd[1]: Mounted media.mount.
Oct 31 01:20:05.932206 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 31 01:20:05.933628 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 31 01:20:05.935126 systemd[1]: Mounted tmp.mount.
Oct 31 01:20:05.936577 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 31 01:20:05.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.938423 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 01:20:05.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.940082 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 01:20:05.940235 systemd[1]: Finished modprobe@configfs.service.
Oct 31 01:20:05.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.941928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 01:20:05.942059 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 01:20:05.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.943709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 01:20:05.943835 systemd[1]: Finished modprobe@drm.service.
Oct 31 01:20:05.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.945412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 01:20:05.945550 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 01:20:05.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.947328 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 01:20:05.947470 systemd[1]: Finished modprobe@fuse.service.
Oct 31 01:20:05.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.949035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 01:20:05.949249 systemd[1]: Finished modprobe@loop.service.
Oct 31 01:20:05.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.951426 systemd[1]: Finished systemd-modules-load.service.
Oct 31 01:20:05.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.953706 systemd[1]: Finished systemd-network-generator.service.
Oct 31 01:20:05.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.955680 systemd[1]: Finished systemd-remount-fs.service.
Oct 31 01:20:05.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.957542 systemd[1]: Reached target network-pre.target.
Oct 31 01:20:05.959978 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 31 01:20:05.962953 systemd[1]: Mounting sys-kernel-config.mount...
Oct 31 01:20:05.964752 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 01:20:05.966380 systemd[1]: Starting systemd-hwdb-update.service...
Oct 31 01:20:05.968812 systemd[1]: Starting systemd-journal-flush.service...
Oct 31 01:20:05.970774 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 01:20:05.973256 systemd-journald[1039]: Time spent on flushing to /var/log/journal/4a16e851a83e4026b0c86c2010eb4ff2 is 12.635ms for 1091 entries.
Oct 31 01:20:05.973256 systemd-journald[1039]: System Journal (/var/log/journal/4a16e851a83e4026b0c86c2010eb4ff2) is 8.0M, max 195.6M, 187.6M free.
Oct 31 01:20:06.205390 systemd-journald[1039]: Received client request to flush runtime journal.
Oct 31 01:20:05.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:05.971716 systemd[1]: Starting systemd-random-seed.service...
Oct 31 01:20:05.975854 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 01:20:05.976733 systemd[1]: Starting systemd-sysctl.service...
Oct 31 01:20:05.979603 systemd[1]: Starting systemd-sysusers.service...
Oct 31 01:20:05.984779 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 01:20:06.206523 udevadm[1063]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 01:20:05.986465 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 31 01:20:05.988110 systemd[1]: Mounted sys-kernel-config.mount.
Oct 31 01:20:05.989980 systemd[1]: Finished systemd-random-seed.service.
Oct 31 01:20:05.991946 systemd[1]: Reached target first-boot-complete.target.
Oct 31 01:20:05.995321 systemd[1]: Starting systemd-udev-settle.service...
Oct 31 01:20:06.001896 systemd[1]: Finished systemd-sysctl.service.
Oct 31 01:20:06.003981 systemd[1]: Finished systemd-sysusers.service.
Oct 31 01:20:06.007396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 01:20:06.022618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 01:20:06.206386 systemd[1]: Finished systemd-journal-flush.service.
Oct 31 01:20:06.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.499290 systemd[1]: Finished systemd-hwdb-update.service.
Oct 31 01:20:06.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.502503 systemd[1]: Starting systemd-udevd.service...
Oct 31 01:20:06.518503 systemd-udevd[1073]: Using default interface naming scheme 'v252'.
Oct 31 01:20:06.533344 systemd[1]: Started systemd-udevd.service.
Oct 31 01:20:06.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.536495 systemd[1]: Starting systemd-networkd.service...
Oct 31 01:20:06.542152 systemd[1]: Starting systemd-userdbd.service...
Oct 31 01:20:06.564976 systemd[1]: Found device dev-ttyS0.device.
Oct 31 01:20:06.581483 systemd[1]: Started systemd-userdbd.service.
Oct 31 01:20:06.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.604813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 31 01:20:06.614388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 01:20:06.624485 kernel: ACPI: button: Power Button [PWRF]
Oct 31 01:20:06.637693 systemd-networkd[1079]: lo: Link UP
Oct 31 01:20:06.637703 systemd-networkd[1079]: lo: Gained carrier
Oct 31 01:20:06.638177 systemd-networkd[1079]: Enumeration completed
Oct 31 01:20:06.638306 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 01:20:06.638340 systemd[1]: Started systemd-networkd.service.
Oct 31 01:20:06.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:20:06.637000 audit[1075]: AVC avc:  denied  { confidentiality } for  pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Oct 31 01:20:06.640442 systemd-networkd[1079]: eth0: Link UP
Oct 31 01:20:06.637000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5632429974c0 a1=338ec a2=7fac79981bc5 a3=5 items=110 ppid=1073 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:20:06.637000 audit: CWD cwd="/"
Oct 31 01:20:06.637000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=1 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=2 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=3 name=(null) inode=15570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=4 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=5 name=(null) inode=15571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=6 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=7 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=8 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=9 name=(null) inode=15573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=10 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=11 name=(null) inode=15574 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=12 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=13 name=(null) inode=15575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=14 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=15 name=(null) inode=15576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=16 name=(null) inode=15572 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=17 name=(null) inode=15577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=18 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=19 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=20 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=21 name=(null) inode=15579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=22 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=23 name=(null) inode=15580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=24 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=25 name=(null) inode=15581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=26 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=27 name=(null) inode=15582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=28 name=(null) inode=15578 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=29 name=(null) inode=15583 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=30 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=31 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=32 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=33 name=(null) inode=15585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=34 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=35 name=(null) inode=15586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=36 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=37 name=(null) inode=15587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=38 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=39 name=(null) inode=15588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=40 name=(null) inode=15584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=41 name=(null) inode=15589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=42 name=(null) inode=15569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=43 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=44 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=45 name=(null) inode=15591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=46 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=47 name=(null) inode=15592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=48 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=49 name=(null) inode=15593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=50 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=51 name=(null) inode=15594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=52 name=(null) inode=15590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=53 name=(null) inode=15595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=55 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=56 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=57 name=(null) inode=15597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=58 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=59 name=(null) inode=15598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=60 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=61 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=62 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=63 name=(null) inode=15600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=64 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=65 name=(null) inode=15601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=66 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=67 name=(null) inode=15602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=68 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=69 name=(null) inode=15603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=70 name=(null) inode=15599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=71 name=(null) inode=15604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=72 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=73 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=74 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=75 name=(null) inode=15606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=76 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=77 name=(null) inode=15607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=78 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=79 name=(null) inode=15608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=80 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=81 name=(null) inode=15609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=82 name=(null) inode=15605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=83 name=(null) inode=15610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=84 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=85 name=(null) inode=15611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=86 name=(null) inode=15611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=87 name=(null) inode=15612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:20:06.637000 audit: PATH item=88 name=(null) inode=15611 dev=00:0b
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=89 name=(null) inode=15613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=90 name=(null) inode=15611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=91 name=(null) inode=15614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=92 name=(null) inode=15611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=93 name=(null) inode=15615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=94 name=(null) inode=15611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=95 name=(null) inode=15616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=96 name=(null) inode=15596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=97 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=98 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=99 name=(null) inode=15618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=100 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=101 name=(null) inode=15619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=102 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=103 name=(null) inode=15620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=104 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=105 name=(null) inode=15621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=106 name=(null) inode=15617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=107 name=(null) inode=15622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PATH item=109 name=(null) inode=15623 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:20:06.637000 audit: PROCTITLE proctitle="(udev-worker)" Oct 31 01:20:06.640446 systemd-networkd[1079]: eth0: Gained carrier Oct 31 01:20:06.656532 systemd-networkd[1079]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 01:20:06.658494 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 31 01:20:06.673346 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 31 01:20:06.678803 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 31 01:20:06.678928 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 31 01:20:06.679040 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 31 01:20:06.685411 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 01:20:06.716669 kernel: kvm: Nested Virtualization enabled Oct 31 01:20:06.716748 kernel: SVM: kvm: Nested Paging enabled Oct 31 01:20:06.716767 kernel: SVM: Virtual VMLOAD VMSAVE supported Oct 31 01:20:06.718409 kernel: SVM: Virtual GIF supported Oct 31 01:20:06.738392 kernel: EDAC MC: Ver: 3.0.0 Oct 31 01:20:06.761739 systemd[1]: Finished systemd-udev-settle.service. 
Oct 31 01:20:06.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:06.764575 systemd[1]: Starting lvm2-activation-early.service... Oct 31 01:20:06.773267 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:20:06.799219 systemd[1]: Finished lvm2-activation-early.service. Oct 31 01:20:06.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:06.800789 systemd[1]: Reached target cryptsetup.target. Oct 31 01:20:06.803146 systemd[1]: Starting lvm2-activation.service... Oct 31 01:20:06.806371 lvm[1113]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:20:06.837274 systemd[1]: Finished lvm2-activation.service. Oct 31 01:20:06.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:06.839016 systemd[1]: Reached target local-fs-pre.target. Oct 31 01:20:06.840427 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 01:20:06.840448 systemd[1]: Reached target local-fs.target. Oct 31 01:20:06.841727 systemd[1]: Reached target machines.target. Oct 31 01:20:06.844259 systemd[1]: Starting ldconfig.service... Oct 31 01:20:06.845746 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Oct 31 01:20:06.845789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:06.846663 systemd[1]: Starting systemd-boot-update.service... Oct 31 01:20:06.849252 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 31 01:20:06.852724 systemd[1]: Starting systemd-machine-id-commit.service... Oct 31 01:20:06.858870 systemd[1]: Starting systemd-sysext.service... Oct 31 01:20:06.861119 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1116 (bootctl) Oct 31 01:20:06.863051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 31 01:20:06.866188 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 31 01:20:06.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:06.872637 systemd[1]: Unmounting usr-share-oem.mount... Oct 31 01:20:06.876571 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 31 01:20:06.876805 systemd[1]: Unmounted usr-share-oem.mount. Oct 31 01:20:06.887391 kernel: loop0: detected capacity change from 0 to 224512 Oct 31 01:20:07.058389 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 01:20:07.059101 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 01:20:07.060421 systemd[1]: Finished systemd-machine-id-commit.service. Oct 31 01:20:07.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:07.064214 systemd-fsck[1128]: fsck.fat 4.2 (2021-01-31) Oct 31 01:20:07.064214 systemd-fsck[1128]: /dev/vda1: 791 files, 120792/258078 clusters Oct 31 01:20:07.065964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 31 01:20:07.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.069270 systemd[1]: Mounting boot.mount... Oct 31 01:20:07.077319 systemd[1]: Mounted boot.mount. Oct 31 01:20:07.082390 kernel: loop1: detected capacity change from 0 to 224512 Oct 31 01:20:07.087601 (sd-sysext)[1136]: Using extensions 'kubernetes'. Oct 31 01:20:07.088000 (sd-sysext)[1136]: Merged extensions into '/usr'. Oct 31 01:20:07.089247 systemd[1]: Finished systemd-boot-update.service. Oct 31 01:20:07.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.102127 ldconfig[1115]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 01:20:07.104610 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.105956 systemd[1]: Mounting usr-share-oem.mount... Oct 31 01:20:07.107272 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.108270 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:20:07.110511 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:20:07.112691 systemd[1]: Starting modprobe@loop.service... Oct 31 01:20:07.114025 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Oct 31 01:20:07.114315 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:07.114615 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.118742 systemd[1]: Finished ldconfig.service. Oct 31 01:20:07.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.120303 systemd[1]: Mounted usr-share-oem.mount. Oct 31 01:20:07.121945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:20:07.122126 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:20:07.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.123786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:20:07.123930 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:20:07.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:07.125630 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:20:07.125771 systemd[1]: Finished modprobe@loop.service. Oct 31 01:20:07.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.127413 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:20:07.127507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.128481 systemd[1]: Finished systemd-sysext.service. Oct 31 01:20:07.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.130961 systemd[1]: Starting ensure-sysext.service... Oct 31 01:20:07.132932 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 31 01:20:07.137816 systemd[1]: Reloading. Oct 31 01:20:07.141420 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 31 01:20:07.142079 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 01:20:07.143442 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 31 01:20:07.188519 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-10-31T01:20:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:20:07.188847 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-10-31T01:20:07Z" level=info msg="torcx already run" Oct 31 01:20:07.247875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:20:07.247903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:20:07.265396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:20:07.320598 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 31 01:20:07.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.326078 systemd[1]: Starting audit-rules.service... Oct 31 01:20:07.328827 systemd[1]: Starting clean-ca-certificates.service... Oct 31 01:20:07.331683 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 31 01:20:07.334714 systemd[1]: Starting systemd-resolved.service... Oct 31 01:20:07.337328 systemd[1]: Starting systemd-timesyncd.service... Oct 31 01:20:07.340054 systemd[1]: Starting systemd-update-utmp.service... Oct 31 01:20:07.342437 systemd[1]: Finished clean-ca-certificates.service. 
Oct 31 01:20:07.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.344000 audit[1234]: SYSTEM_BOOT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.350840 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 31 01:20:07.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:07.355730 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.356036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.359223 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:20:07.360000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 31 01:20:07.360000 audit[1243]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd77b73860 a2=420 a3=0 items=0 ppid=1221 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:07.360000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 31 01:20:07.361314 augenrules[1243]: No rules Oct 31 01:20:07.362549 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:20:07.366985 systemd[1]: Starting modprobe@loop.service... 
Oct 31 01:20:07.368438 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.368590 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:07.369889 systemd[1]: Starting systemd-update-done.service... Oct 31 01:20:07.371406 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:20:07.371526 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.372986 systemd[1]: Finished audit-rules.service. Oct 31 01:20:07.375269 systemd[1]: Finished systemd-update-utmp.service. Oct 31 01:20:07.377286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:20:07.377446 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:20:07.379698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:20:07.379942 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:20:07.382011 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:20:07.382177 systemd[1]: Finished modprobe@loop.service. Oct 31 01:20:07.384918 systemd[1]: Finished systemd-update-done.service. Oct 31 01:20:07.387959 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:20:07.388058 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.390080 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.390265 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Oct 31 01:20:07.391600 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:20:07.393751 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:20:07.396022 systemd[1]: Starting modprobe@loop.service... Oct 31 01:20:07.398665 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.398781 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:07.398862 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:20:07.398957 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.399754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:20:07.399902 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:20:07.401857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:20:07.402043 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:20:07.403923 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:20:07.404055 systemd[1]: Finished modprobe@loop.service. Oct 31 01:20:07.405692 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:20:07.405776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.408353 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.408621 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.410168 systemd[1]: Starting modprobe@dm_mod.service... 
Oct 31 01:20:07.413032 systemd[1]: Starting modprobe@drm.service... Oct 31 01:20:07.415957 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:20:07.418977 systemd[1]: Starting modprobe@loop.service... Oct 31 01:20:07.420374 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.420536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:07.422681 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 31 01:20:07.424507 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:20:07.424630 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:20:07.425952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:20:07.426108 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:20:07.428127 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 01:20:07.428312 systemd[1]: Finished modprobe@drm.service. Oct 31 01:20:07.429167 systemd-resolved[1228]: Positive Trust Anchors: Oct 31 01:20:07.429191 systemd-resolved[1228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:20:07.429229 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 01:20:07.430781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:20:07.430982 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:20:07.433289 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:20:07.433534 systemd[1]: Finished modprobe@loop.service. Oct 31 01:20:07.436047 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:20:07.436168 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:20:07.438833 systemd[1]: Finished ensure-sysext.service. Oct 31 01:20:07.439378 systemd-resolved[1228]: Defaulting to hostname 'linux'. Oct 31 01:20:07.440959 systemd[1]: Started systemd-resolved.service. Oct 31 01:20:07.442600 systemd[1]: Reached target network.target. Oct 31 01:20:07.444123 systemd[1]: Reached target nss-lookup.target. Oct 31 01:20:07.464294 systemd[1]: Started systemd-timesyncd.service. Oct 31 01:20:08.081655 systemd-resolved[1228]: Clock change detected. Flushing caches. Oct 31 01:20:08.082082 systemd-timesyncd[1232]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 01:20:08.082161 systemd-timesyncd[1232]: Initial clock synchronization to Fri 2025-10-31 01:20:08.081590 UTC. Oct 31 01:20:08.082554 systemd[1]: Reached target sysinit.target. 
Oct 31 01:20:08.084146 systemd[1]: Started motdgen.path. Oct 31 01:20:08.085364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 31 01:20:08.087270 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 31 01:20:08.088959 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 01:20:08.088985 systemd[1]: Reached target paths.target. Oct 31 01:20:08.090454 systemd[1]: Reached target time-set.target. Oct 31 01:20:08.092157 systemd[1]: Started logrotate.timer. Oct 31 01:20:08.093642 systemd[1]: Started mdadm.timer. Oct 31 01:20:08.094950 systemd[1]: Reached target timers.target. Oct 31 01:20:08.096721 systemd[1]: Listening on dbus.socket. Oct 31 01:20:08.099156 systemd[1]: Starting docker.socket... Oct 31 01:20:08.101430 systemd[1]: Listening on sshd.socket. Oct 31 01:20:08.102835 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:08.103107 systemd[1]: Listening on docker.socket. Oct 31 01:20:08.104557 systemd[1]: Reached target sockets.target. Oct 31 01:20:08.106060 systemd[1]: Reached target basic.target. Oct 31 01:20:08.107595 systemd[1]: System is tainted: cgroupsv1 Oct 31 01:20:08.107636 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 01:20:08.107653 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 01:20:08.108576 systemd[1]: Starting containerd.service... Oct 31 01:20:08.111039 systemd[1]: Starting dbus.service... Oct 31 01:20:08.113721 systemd[1]: Starting enable-oem-cloudinit.service... Oct 31 01:20:08.116418 systemd[1]: Starting extend-filesystems.service... 
Oct 31 01:20:08.118045 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 31 01:20:08.119234 systemd[1]: Starting motdgen.service... Oct 31 01:20:08.121871 systemd[1]: Starting prepare-helm.service... Oct 31 01:20:08.124708 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 31 01:20:08.127815 systemd[1]: Starting sshd-keygen.service... Oct 31 01:20:08.132086 jq[1284]: false Oct 31 01:20:08.139210 dbus-daemon[1283]: [system] SELinux support is enabled Oct 31 01:20:08.143664 extend-filesystems[1285]: Found loop1 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found sr0 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda1 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda2 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda3 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found usr Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda4 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda6 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda7 Oct 31 01:20:08.143664 extend-filesystems[1285]: Found vda9 Oct 31 01:20:08.143664 extend-filesystems[1285]: Checking size of /dev/vda9 Oct 31 01:20:08.200566 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 31 01:20:08.135702 systemd[1]: Starting systemd-logind.service... Oct 31 01:20:08.200708 extend-filesystems[1285]: Resized partition /dev/vda9 Oct 31 01:20:08.137180 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:20:08.202561 extend-filesystems[1313]: resize2fs 1.46.5 (30-Dec-2021) Oct 31 01:20:08.137249 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 31 01:20:08.207352 update_engine[1303]: I1031 01:20:08.193131 1303 main.cc:92] Flatcar Update Engine starting Oct 31 01:20:08.207352 update_engine[1303]: I1031 01:20:08.194767 1303 update_check_scheduler.cc:74] Next update check in 2m53s Oct 31 01:20:08.138661 systemd[1]: Starting update-engine.service... Oct 31 01:20:08.207726 jq[1308]: true Oct 31 01:20:08.141694 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 31 01:20:08.145371 systemd[1]: Started dbus.service. Oct 31 01:20:08.208223 jq[1317]: true Oct 31 01:20:08.149438 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 01:20:08.208499 tar[1316]: linux-amd64/LICENSE Oct 31 01:20:08.208499 tar[1316]: linux-amd64/helm Oct 31 01:20:08.150181 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 31 01:20:08.208877 env[1318]: time="2025-10-31T01:20:08.202480383Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 31 01:20:08.150453 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 01:20:08.150638 systemd[1]: Finished motdgen.service. Oct 31 01:20:08.153058 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 01:20:08.153279 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 31 01:20:08.156187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 01:20:08.156214 systemd[1]: Reached target system-config.target. Oct 31 01:20:08.159266 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 01:20:08.159280 systemd[1]: Reached target user-config.target. Oct 31 01:20:08.195659 systemd[1]: Started update-engine.service. Oct 31 01:20:08.198977 systemd[1]: Started locksmithd.service. 
Oct 31 01:20:08.215424 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 31 01:20:08.225707 env[1318]: time="2025-10-31T01:20:08.225666275Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 01:20:08.240634 env[1318]: time="2025-10-31T01:20:08.240438519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.240697 systemd-logind[1300]: Watching system buttons on /dev/input/event1 (Power Button) Oct 31 01:20:08.240714 systemd-logind[1300]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 01:20:08.241521 extend-filesystems[1313]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 01:20:08.241521 extend-filesystems[1313]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 01:20:08.241521 extend-filesystems[1313]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 31 01:20:08.240968 systemd-logind[1300]: New seat seat0. Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.243069282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.243098006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244021849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244039221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244064799Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244074237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244139569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.244929481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.246114473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:20:08.247873 env[1318]: time="2025-10-31T01:20:08.246133328Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 01:20:08.248097 extend-filesystems[1285]: Resized filesystem in /dev/vda9 Oct 31 01:20:08.247730 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Oct 31 01:20:08.251443 env[1318]: time="2025-10-31T01:20:08.246186378Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 31 01:20:08.251443 env[1318]: time="2025-10-31T01:20:08.246195976Z" level=info msg="metadata content store policy set" policy=shared Oct 31 01:20:08.248008 systemd[1]: Finished extend-filesystems.service. Oct 31 01:20:08.252105 systemd[1]: Started systemd-logind.service. Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258001626Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258052281Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258068782Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258108586Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258129435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258145976Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258161626Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258179058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258194437Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258216158Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258232539Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258247807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258353325Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 01:20:08.260423 env[1318]: time="2025-10-31T01:20:08.258471086Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258863272Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258889992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258918575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258967216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258983808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.258997644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259010838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259024945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259041476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259056223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259069939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259085098Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259212296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259229558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.260878 env[1318]: time="2025-10-31T01:20:08.259244106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.261281 env[1318]: time="2025-10-31T01:20:08.259258403Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 31 01:20:08.261281 env[1318]: time="2025-10-31T01:20:08.259276917Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 31 01:20:08.261281 env[1318]: time="2025-10-31T01:20:08.259290222Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 01:20:08.261281 env[1318]: time="2025-10-31T01:20:08.259313135Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 31 01:20:08.261281 env[1318]: time="2025-10-31T01:20:08.259353681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 31 01:20:08.261434 env[1318]: time="2025-10-31T01:20:08.259596777Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 01:20:08.261434 env[1318]: time="2025-10-31T01:20:08.259667921Z" level=info msg="Connect containerd service" Oct 31 01:20:08.261434 env[1318]: time="2025-10-31T01:20:08.259704539Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 01:20:08.261434 env[1318]: time="2025-10-31T01:20:08.260236206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 01:20:08.262231 env[1318]: time="2025-10-31T01:20:08.261698388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 01:20:08.262231 env[1318]: time="2025-10-31T01:20:08.261737502Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 31 01:20:08.262231 env[1318]: time="2025-10-31T01:20:08.261775784Z" level=info msg="containerd successfully booted in 0.059863s" Oct 31 01:20:08.261830 systemd[1]: Started containerd.service. Oct 31 01:20:08.264582 env[1318]: time="2025-10-31T01:20:08.262068192Z" level=info msg="Start subscribing containerd event" Oct 31 01:20:08.264774 env[1318]: time="2025-10-31T01:20:08.264686101Z" level=info msg="Start recovering state" Oct 31 01:20:08.266363 bash[1345]: Updated "/home/core/.ssh/authorized_keys" Oct 31 01:20:08.266259 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 31 01:20:08.271611 env[1318]: time="2025-10-31T01:20:08.265445325Z" level=info msg="Start event monitor" Oct 31 01:20:08.271776 env[1318]: time="2025-10-31T01:20:08.271751632Z" level=info msg="Start snapshots syncer" Oct 31 01:20:08.271918 env[1318]: time="2025-10-31T01:20:08.271884421Z" level=info msg="Start cni network conf syncer for default" Oct 31 01:20:08.272007 env[1318]: time="2025-10-31T01:20:08.271985641Z" level=info msg="Start streaming server" Oct 31 01:20:08.275875 locksmithd[1333]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 01:20:08.606743 tar[1316]: linux-amd64/README.md Oct 31 01:20:08.611849 systemd[1]: Finished prepare-helm.service. Oct 31 01:20:09.033596 systemd-networkd[1079]: eth0: Gained IPv6LL Oct 31 01:20:09.035588 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 31 01:20:09.037925 systemd[1]: Reached target network-online.target. Oct 31 01:20:09.041043 systemd[1]: Starting kubelet.service... Oct 31 01:20:09.350742 sshd_keygen[1314]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 01:20:09.369701 systemd[1]: Finished sshd-keygen.service. Oct 31 01:20:09.372889 systemd[1]: Starting issuegen.service... Oct 31 01:20:09.377965 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 01:20:09.378263 systemd[1]: Finished issuegen.service. 
Oct 31 01:20:09.381644 systemd[1]: Starting systemd-user-sessions.service... Oct 31 01:20:09.388120 systemd[1]: Finished systemd-user-sessions.service. Oct 31 01:20:09.391474 systemd[1]: Started getty@tty1.service. Oct 31 01:20:09.394017 systemd[1]: Started serial-getty@ttyS0.service. Oct 31 01:20:09.395644 systemd[1]: Reached target getty.target. Oct 31 01:20:09.701190 systemd[1]: Started kubelet.service. Oct 31 01:20:09.703701 systemd[1]: Reached target multi-user.target. Oct 31 01:20:09.706951 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 31 01:20:09.713999 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 31 01:20:09.714263 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 31 01:20:09.719233 systemd[1]: Startup finished in 5.595s (kernel) + 5.730s (userspace) = 11.325s. Oct 31 01:20:10.134216 kubelet[1384]: E1031 01:20:10.134072 1384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:20:10.136815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:20:10.137019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:20:12.602017 systemd[1]: Created slice system-sshd.slice. Oct 31 01:20:12.603422 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:33560.service. Oct 31 01:20:12.636117 sshd[1394]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:12.637459 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:12.644836 systemd[1]: Created slice user-500.slice. Oct 31 01:20:12.645682 systemd[1]: Starting user-runtime-dir@500.service... 
Oct 31 01:20:12.647114 systemd-logind[1300]: New session 1 of user core. Oct 31 01:20:12.653212 systemd[1]: Finished user-runtime-dir@500.service. Oct 31 01:20:12.654206 systemd[1]: Starting user@500.service... Oct 31 01:20:12.657664 (systemd)[1399]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:12.725858 systemd[1399]: Queued start job for default target default.target. Oct 31 01:20:12.726057 systemd[1399]: Reached target paths.target. Oct 31 01:20:12.726072 systemd[1399]: Reached target sockets.target. Oct 31 01:20:12.726083 systemd[1399]: Reached target timers.target. Oct 31 01:20:12.726093 systemd[1399]: Reached target basic.target. Oct 31 01:20:12.726132 systemd[1399]: Reached target default.target. Oct 31 01:20:12.726158 systemd[1399]: Startup finished in 62ms. Oct 31 01:20:12.726253 systemd[1]: Started user@500.service. Oct 31 01:20:12.727226 systemd[1]: Started session-1.scope. Oct 31 01:20:12.776968 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:33574.service. Oct 31 01:20:12.806658 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 33574 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:12.808000 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:12.812170 systemd-logind[1300]: New session 2 of user core. Oct 31 01:20:12.812394 systemd[1]: Started session-2.scope. Oct 31 01:20:12.866148 sshd[1408]: pam_unix(sshd:session): session closed for user core Oct 31 01:20:12.868200 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:33588.service. Oct 31 01:20:12.869873 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:33574.service: Deactivated successfully. Oct 31 01:20:12.870616 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 01:20:12.870789 systemd-logind[1300]: Session 2 logged out. Waiting for processes to exit. Oct 31 01:20:12.871552 systemd-logind[1300]: Removed session 2. 
Oct 31 01:20:12.900584 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 33588 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:12.902202 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:12.906378 systemd-logind[1300]: New session 3 of user core. Oct 31 01:20:12.907025 systemd[1]: Started session-3.scope. Oct 31 01:20:12.958603 sshd[1413]: pam_unix(sshd:session): session closed for user core Oct 31 01:20:12.961173 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:33592.service. Oct 31 01:20:12.961778 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:33588.service: Deactivated successfully. Oct 31 01:20:12.962784 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 01:20:12.962830 systemd-logind[1300]: Session 3 logged out. Waiting for processes to exit. Oct 31 01:20:12.963909 systemd-logind[1300]: Removed session 3. Oct 31 01:20:12.994860 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:12.996162 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:13.000363 systemd-logind[1300]: New session 4 of user core. Oct 31 01:20:13.001331 systemd[1]: Started session-4.scope. Oct 31 01:20:13.054692 sshd[1420]: pam_unix(sshd:session): session closed for user core Oct 31 01:20:13.056890 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:33606.service. Oct 31 01:20:13.057407 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:33592.service: Deactivated successfully. Oct 31 01:20:13.058211 systemd-logind[1300]: Session 4 logged out. Waiting for processes to exit. Oct 31 01:20:13.058245 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 01:20:13.058954 systemd-logind[1300]: Removed session 4. 
Oct 31 01:20:13.089142 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 33606 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:13.090151 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:13.093395 systemd-logind[1300]: New session 5 of user core. Oct 31 01:20:13.094090 systemd[1]: Started session-5.scope. Oct 31 01:20:13.146897 sudo[1433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 01:20:13.147079 sudo[1433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:20:13.153594 dbus-daemon[1283]: avc: received setenforce notice (enforcing=1) Oct 31 01:20:13.155431 sudo[1433]: pam_unix(sudo:session): session closed for user root Oct 31 01:20:13.156756 sshd[1427]: pam_unix(sshd:session): session closed for user core Oct 31 01:20:13.158891 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:33616.service. Oct 31 01:20:13.159781 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:33606.service: Deactivated successfully. Oct 31 01:20:13.160613 systemd-logind[1300]: Session 5 logged out. Waiting for processes to exit. Oct 31 01:20:13.160668 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 01:20:13.161451 systemd-logind[1300]: Removed session 5. Oct 31 01:20:13.189208 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 33616 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:13.190238 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:13.193489 systemd-logind[1300]: New session 6 of user core. Oct 31 01:20:13.194438 systemd[1]: Started session-6.scope. 
Oct 31 01:20:13.246545 sudo[1442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 01:20:13.246726 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:20:13.249126 sudo[1442]: pam_unix(sudo:session): session closed for user root Oct 31 01:20:13.252577 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 01:20:13.252742 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:20:13.259968 systemd[1]: Stopping audit-rules.service... Oct 31 01:20:13.259000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 01:20:13.261025 auditctl[1445]: No rules Oct 31 01:20:13.261497 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 01:20:13.261854 systemd[1]: Stopped audit-rules.service. Oct 31 01:20:13.262588 kernel: kauditd_printk_skb: 214 callbacks suppressed Oct 31 01:20:13.262625 kernel: audit: type=1305 audit(1761873613.259:135): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 01:20:13.263952 systemd[1]: Starting audit-rules.service... 
Oct 31 01:20:13.259000 audit[1445]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd006d1480 a2=420 a3=0 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:13.274504 kernel: audit: type=1300 audit(1761873613.259:135): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd006d1480 a2=420 a3=0 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:13.274559 kernel: audit: type=1327 audit(1761873613.259:135): proctitle=2F7362696E2F617564697463746C002D44 Oct 31 01:20:13.259000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 31 01:20:13.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.283359 augenrules[1463]: No rules Oct 31 01:20:13.283949 systemd[1]: Finished audit-rules.service. Oct 31 01:20:13.284700 kernel: audit: type=1131 audit(1761873613.260:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.284759 sudo[1441]: pam_unix(sudo:session): session closed for user root Oct 31 01:20:13.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:13.283000 audit[1441]: USER_END pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.290496 sshd[1435]: pam_unix(sshd:session): session closed for user core Oct 31 01:20:13.292210 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:33616.service: Deactivated successfully. Oct 31 01:20:13.293196 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 01:20:13.293246 systemd-logind[1300]: Session 6 logged out. Waiting for processes to exit. Oct 31 01:20:13.294161 systemd-logind[1300]: Removed session 6. Oct 31 01:20:13.296414 kernel: audit: type=1130 audit(1761873613.282:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.296471 kernel: audit: type=1106 audit(1761873613.283:138): pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.296494 kernel: audit: type=1104 audit(1761873613.283:139): pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.283000 audit[1441]: CRED_DISP pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:13.306337 kernel: audit: type=1106 audit(1761873613.289:140): pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.289000 audit[1435]: USER_END pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.306633 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:33622.service. Oct 31 01:20:13.289000 audit[1435]: CRED_DISP pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.319503 kernel: audit: type=1104 audit(1761873613.289:141): pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.319542 kernel: audit: type=1131 audit(1761873613.291:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.140:22-10.0.0.1:33616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.140:22-10.0.0.1:33616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:13.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:33622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.343000 audit[1470]: USER_ACCT pid=1470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.344907 sshd[1470]: Accepted publickey for core from 10.0.0.1 port 33622 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:20:13.344000 audit[1470]: CRED_ACQ pid=1470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.344000 audit[1470]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff268c5410 a2=3 a3=0 items=0 ppid=1 pid=1470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:13.344000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:20:13.345756 sshd[1470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:20:13.348695 systemd-logind[1300]: New session 7 of user core. Oct 31 01:20:13.349357 systemd[1]: Started session-7.scope. 
Oct 31 01:20:13.351000 audit[1470]: USER_START pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.352000 audit[1473]: CRED_ACQ pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:20:13.398000 audit[1474]: USER_ACCT pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.398000 audit[1474]: CRED_REFR pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.399951 sudo[1474]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 01:20:13.400178 sudo[1474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:20:13.400000 audit[1474]: USER_START pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:20:13.418577 systemd[1]: Starting docker.service... 
Oct 31 01:20:13.452160 env[1486]: time="2025-10-31T01:20:13.452106526Z" level=info msg="Starting up" Oct 31 01:20:13.453190 env[1486]: time="2025-10-31T01:20:13.453159631Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:20:13.453190 env[1486]: time="2025-10-31T01:20:13.453174449Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:20:13.453304 env[1486]: time="2025-10-31T01:20:13.453192984Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:20:13.453304 env[1486]: time="2025-10-31T01:20:13.453208563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:20:13.454714 env[1486]: time="2025-10-31T01:20:13.454697656Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:20:13.454714 env[1486]: time="2025-10-31T01:20:13.454709458Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:20:13.454801 env[1486]: time="2025-10-31T01:20:13.454719256Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:20:13.454801 env[1486]: time="2025-10-31T01:20:13.454726029Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:20:14.094920 env[1486]: time="2025-10-31T01:20:14.094859999Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 31 01:20:14.094920 env[1486]: time="2025-10-31T01:20:14.094890225Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 31 01:20:14.095149 env[1486]: time="2025-10-31T01:20:14.095127600Z" level=info msg="Loading containers: start." 
Oct 31 01:20:14.150000 audit[1520]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.150000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe0206e780 a2=0 a3=7ffe0206e76c items=0 ppid=1486 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.150000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 31 01:20:14.152000 audit[1522]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.152000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc9f896c0 a2=0 a3=7ffcc9f896ac items=0 ppid=1486 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.152000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 31 01:20:14.154000 audit[1524]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.154000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffdb0f83a0 a2=0 a3=7fffdb0f838c items=0 ppid=1486 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.154000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:20:14.155000 audit[1526]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.155000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff64fbe400 a2=0 a3=7fff64fbe3ec items=0 ppid=1486 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.155000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:20:14.157000 audit[1528]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.157000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc1929300 a2=0 a3=7ffcc19292ec items=0 ppid=1486 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.157000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 31 01:20:14.172000 audit[1533]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.172000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc52e361c0 a2=0 a3=7ffc52e361ac items=0 ppid=1486 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.172000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 31 01:20:14.295000 audit[1535]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.295000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff41cf7a50 a2=0 a3=7fff41cf7a3c items=0 ppid=1486 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.295000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 31 01:20:14.297000 audit[1537]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.297000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff59d362c0 a2=0 a3=7fff59d362ac items=0 ppid=1486 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.297000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 31 01:20:14.299000 audit[1539]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.299000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffccc088340 a2=0 a3=7ffccc08832c items=0 ppid=1486 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.299000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:20:14.381000 audit[1543]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.381000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc808a34f0 a2=0 a3=7ffc808a34dc items=0 ppid=1486 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.381000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:20:14.388000 audit[1544]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.388000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd1fada5d0 a2=0 a3=7ffd1fada5bc items=0 ppid=1486 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.388000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:20:14.398425 kernel: Initializing XFRM netlink socket Oct 31 01:20:14.427577 env[1486]: time="2025-10-31T01:20:14.427534171Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 31 01:20:14.442000 audit[1552]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.442000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd1e2b99a0 a2=0 a3=7ffd1e2b998c items=0 ppid=1486 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 31 01:20:14.452000 audit[1555]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.452000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5eda5a50 a2=0 a3=7ffd5eda5a3c items=0 ppid=1486 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.452000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 31 01:20:14.455000 audit[1558]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.455000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd6a4070a0 a2=0 a3=7ffd6a40708c items=0 ppid=1486 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.455000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 31 01:20:14.456000 audit[1560]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.456000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffff0abf570 a2=0 a3=7ffff0abf55c items=0 ppid=1486 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.456000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 31 01:20:14.458000 audit[1562]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.458000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdf8fc4e20 a2=0 a3=7ffdf8fc4e0c items=0 ppid=1486 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.458000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 31 01:20:14.460000 audit[1564]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.460000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffec57d9120 a2=0 a3=7ffec57d910c items=0 ppid=1486 
pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.460000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 31 01:20:14.461000 audit[1566]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.461000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdde95de10 a2=0 a3=7ffdde95ddfc items=0 ppid=1486 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.461000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 31 01:20:14.468000 audit[1569]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.468000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffffb7422b0 a2=0 a3=7ffffb74229c items=0 ppid=1486 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.468000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 31 01:20:14.470000 audit[1571]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.470000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffddc3cdfd0 a2=0 a3=7ffddc3cdfbc items=0 ppid=1486 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.470000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:20:14.471000 audit[1573]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.471000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffa88dec90 a2=0 a3=7fffa88dec7c items=0 ppid=1486 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.471000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:20:14.473000 audit[1575]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.473000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe9f85fda0 a2=0 a3=7ffe9f85fd8c items=0 ppid=1486 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.473000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 31 01:20:14.475396 systemd-networkd[1079]: docker0: Link UP Oct 31 01:20:14.528000 audit[1579]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.528000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed65ef070 a2=0 a3=7ffed65ef05c items=0 ppid=1486 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.528000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:20:14.533000 audit[1580]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:14.533000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe8ac869d0 a2=0 a3=7ffe8ac869bc items=0 ppid=1486 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:14.533000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:20:14.535326 env[1486]: time="2025-10-31T01:20:14.535295222Z" level=info msg="Loading containers: done." 
Oct 31 01:20:14.647802 env[1486]: time="2025-10-31T01:20:14.647681296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 01:20:14.647961 env[1486]: time="2025-10-31T01:20:14.647903402Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 31 01:20:14.648033 env[1486]: time="2025-10-31T01:20:14.648010483Z" level=info msg="Daemon has completed initialization" Oct 31 01:20:14.665130 systemd[1]: Started docker.service. Oct 31 01:20:14.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:14.671009 env[1486]: time="2025-10-31T01:20:14.670945685Z" level=info msg="API listen on /run/docker.sock" Oct 31 01:20:15.413118 env[1318]: time="2025-10-31T01:20:15.413071526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 01:20:16.199639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456392979.mount: Deactivated successfully. 
Oct 31 01:20:18.043498 env[1318]: time="2025-10-31T01:20:18.043418185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:18.045448 env[1318]: time="2025-10-31T01:20:18.045396465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:18.047485 env[1318]: time="2025-10-31T01:20:18.047436881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:18.049324 env[1318]: time="2025-10-31T01:20:18.049276140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:18.050421 env[1318]: time="2025-10-31T01:20:18.050376975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 31 01:20:18.051030 env[1318]: time="2025-10-31T01:20:18.051002528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 01:20:19.951826 env[1318]: time="2025-10-31T01:20:19.951717637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:20.024174 env[1318]: time="2025-10-31T01:20:20.022584348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 01:20:20.054961 env[1318]: time="2025-10-31T01:20:20.054776319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:20.058132 env[1318]: time="2025-10-31T01:20:20.058062752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:20.061762 env[1318]: time="2025-10-31T01:20:20.059635201Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 31 01:20:20.061762 env[1318]: time="2025-10-31T01:20:20.061675447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 01:20:20.388096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 01:20:20.391984 systemd[1]: Stopped kubelet.service. Oct 31 01:20:20.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:20.395872 kernel: kauditd_printk_skb: 84 callbacks suppressed Oct 31 01:20:20.395941 kernel: audit: type=1130 audit(1761873620.390:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:20.401423 systemd[1]: Starting kubelet.service... Oct 31 01:20:20.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 01:20:20.423801 kernel: audit: type=1131 audit(1761873620.390:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:20.661559 systemd[1]: Started kubelet.service. Oct 31 01:20:20.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:20.676906 kernel: audit: type=1130 audit(1761873620.661:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:20.852474 kubelet[1626]: E1031 01:20:20.852396 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:20:20.855421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:20:20.855562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:20:20.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:20:20.864554 kernel: audit: type=1131 audit(1761873620.854:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 01:20:22.670769 env[1318]: time="2025-10-31T01:20:22.670718134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:22.673148 env[1318]: time="2025-10-31T01:20:22.673124827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:22.675067 env[1318]: time="2025-10-31T01:20:22.675030812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:22.676870 env[1318]: time="2025-10-31T01:20:22.676844954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:22.677551 env[1318]: time="2025-10-31T01:20:22.677528205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 31 01:20:22.678088 env[1318]: time="2025-10-31T01:20:22.678066965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 01:20:24.646686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473317348.mount: Deactivated successfully. 
Oct 31 01:20:27.832429 env[1318]: time="2025-10-31T01:20:27.831346033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:27.837218 env[1318]: time="2025-10-31T01:20:27.836959891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:27.841563 env[1318]: time="2025-10-31T01:20:27.841484366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:27.844102 env[1318]: time="2025-10-31T01:20:27.843964367Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 31 01:20:27.850985 env[1318]: time="2025-10-31T01:20:27.847147216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 01:20:27.854938 env[1318]: time="2025-10-31T01:20:27.849949662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:28.870117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794057274.mount: Deactivated successfully. 
Oct 31 01:20:30.989775 env[1318]: time="2025-10-31T01:20:30.989707208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:30.991965 env[1318]: time="2025-10-31T01:20:30.991910349Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:30.994472 env[1318]: time="2025-10-31T01:20:30.994436357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:30.996766 env[1318]: time="2025-10-31T01:20:30.996726351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:30.997863 env[1318]: time="2025-10-31T01:20:30.997803000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 31 01:20:30.998377 env[1318]: time="2025-10-31T01:20:30.998336000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 01:20:31.106415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 01:20:31.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.106603 systemd[1]: Stopped kubelet.service. Oct 31 01:20:31.108112 systemd[1]: Starting kubelet.service... 
Oct 31 01:20:31.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.122046 kernel: audit: type=1130 audit(1761873631.105:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.122186 kernel: audit: type=1131 audit(1761873631.105:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.245222 systemd[1]: Started kubelet.service. Oct 31 01:20:31.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.253432 kernel: audit: type=1130 audit(1761873631.244:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:31.892120 kubelet[1643]: E1031 01:20:31.892042 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:20:31.952301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:20:31.952509 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 31 01:20:31.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:20:31.959433 kernel: audit: type=1131 audit(1761873631.951:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:20:32.171925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080601675.mount: Deactivated successfully. Oct 31 01:20:32.177253 env[1318]: time="2025-10-31T01:20:32.177208897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:32.181614 env[1318]: time="2025-10-31T01:20:32.181568954Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:32.182608 env[1318]: time="2025-10-31T01:20:32.182574169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:32.184074 env[1318]: time="2025-10-31T01:20:32.184036351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:32.184615 env[1318]: time="2025-10-31T01:20:32.184586783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 01:20:32.185033 env[1318]: time="2025-10-31T01:20:32.185009867Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 01:20:32.773705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956866829.mount: Deactivated successfully. Oct 31 01:20:38.443419 env[1318]: time="2025-10-31T01:20:38.443325262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:38.474486 env[1318]: time="2025-10-31T01:20:38.474439903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:38.477083 env[1318]: time="2025-10-31T01:20:38.477043706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:38.479036 env[1318]: time="2025-10-31T01:20:38.478973626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:38.479814 env[1318]: time="2025-10-31T01:20:38.479782583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 31 01:20:41.993790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 31 01:20:41.994057 systemd[1]: Stopped kubelet.service. Oct 31 01:20:41.995636 systemd[1]: Starting kubelet.service... Oct 31 01:20:41.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:41.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.006350 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 01:20:42.006440 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 01:20:42.006720 systemd[1]: Stopped kubelet.service. Oct 31 01:20:42.008886 kernel: audit: type=1130 audit(1761873641.992:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.008975 kernel: audit: type=1131 audit(1761873641.992:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.009025 kernel: audit: type=1130 audit(1761873642.005:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:20:42.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:20:42.009464 systemd[1]: Starting kubelet.service... Oct 31 01:20:42.033741 systemd[1]: Reloading. 
Oct 31 01:20:42.099721 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-10-31T01:20:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:20:42.099754 /usr/lib/systemd/system-generators/torcx-generator[1704]: time="2025-10-31T01:20:42Z" level=info msg="torcx already run" Oct 31 01:20:42.412285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:20:42.412302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:20:42.429637 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:20:42.501786 systemd[1]: Started kubelet.service. Oct 31 01:20:42.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.504458 systemd[1]: Stopping kubelet.service... Oct 31 01:20:42.504728 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:20:42.504935 systemd[1]: Stopped kubelet.service. Oct 31 01:20:42.506418 systemd[1]: Starting kubelet.service... Oct 31 01:20:42.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:42.512746 kernel: audit: type=1130 audit(1761873642.500:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.512830 kernel: audit: type=1131 audit(1761873642.503:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.591493 systemd[1]: Started kubelet.service. Oct 31 01:20:42.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.601410 kernel: audit: type=1130 audit(1761873642.591:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:42.737757 kubelet[1766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:20:42.737757 kubelet[1766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:20:42.737757 kubelet[1766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:20:42.738178 kubelet[1766]: I1031 01:20:42.737748 1766 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:20:42.984946 kubelet[1766]: I1031 01:20:42.984883 1766 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:20:42.984946 kubelet[1766]: I1031 01:20:42.984920 1766 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:20:42.985270 kubelet[1766]: I1031 01:20:42.985246 1766 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:20:43.011285 kubelet[1766]: E1031 01:20:43.011148 1766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:43.015257 kubelet[1766]: I1031 01:20:43.015182 1766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:20:43.111296 kubelet[1766]: E1031 01:20:43.111251 1766 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:20:43.111296 kubelet[1766]: I1031 01:20:43.111288 1766 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 01:20:43.115304 kubelet[1766]: I1031 01:20:43.115269 1766 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 01:20:43.116529 kubelet[1766]: I1031 01:20:43.116489 1766 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:20:43.116697 kubelet[1766]: I1031 01:20:43.116520 1766 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 01:20:43.117025 kubelet[1766]: I1031 01:20:43.116699 1766 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 01:20:43.117025 kubelet[1766]: I1031 01:20:43.116709 1766 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:20:43.117025 kubelet[1766]: I1031 01:20:43.116809 1766 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:20:43.119568 kubelet[1766]: I1031 01:20:43.119542 1766 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:20:43.119628 kubelet[1766]: I1031 01:20:43.119573 1766 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:20:43.119628 kubelet[1766]: I1031 01:20:43.119589 1766 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:20:43.119628 kubelet[1766]: I1031 01:20:43.119598 1766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:20:43.138233 kubelet[1766]: I1031 01:20:43.138202 1766 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:20:43.138602 kubelet[1766]: I1031 01:20:43.138584 1766 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:20:43.153224 kubelet[1766]: W1031 01:20:43.153195 1766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 31 01:20:43.160527 kubelet[1766]: W1031 01:20:43.160445 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:43.160527 kubelet[1766]: E1031 01:20:43.160522 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:43.160749 kubelet[1766]: W1031 01:20:43.160544 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:43.160749 kubelet[1766]: E1031 01:20:43.160589 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:43.165119 kubelet[1766]: I1031 01:20:43.165091 1766 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:20:43.165211 kubelet[1766]: I1031 01:20:43.165137 1766 server.go:1287] "Started kubelet" Oct 31 01:20:43.164000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:43.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 
01:20:43.172587 kubelet[1766]: I1031 01:20:43.166227 1766 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:20:43.172587 kubelet[1766]: I1031 01:20:43.166275 1766 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:20:43.172587 kubelet[1766]: I1031 01:20:43.166346 1766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:20:43.172587 kubelet[1766]: I1031 01:20:43.170511 1766 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:20:43.172587 kubelet[1766]: I1031 01:20:43.171318 1766 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:20:43.174701 kernel: audit: type=1400 audit(1761873643.164:191): avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:43.174754 kernel: audit: type=1401 audit(1761873643.164:191): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:43.174773 kernel: audit: type=1300 audit(1761873643.164:191): arch=c000003e syscall=188 success=no exit=-22 a0=c0009a7e30 a1=c000c8c5a0 a2=c0009a7e00 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.164000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009a7e30 a1=c000c8c5a0 a2=c0009a7e00 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.181556 kubelet[1766]: I1031 01:20:43.181537 1766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:20:43.182797 kernel: audit: type=1327 audit(1761873643.164:191): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:43.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:43.186052 kubelet[1766]: I1031 01:20:43.185933 1766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:20:43.186269 kubelet[1766]: I1031 01:20:43.186232 1766 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:20:43.186327 kubelet[1766]: I1031 01:20:43.186312 1766 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:20:43.186803 kubelet[1766]: E1031 01:20:43.186771 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:43.186859 kubelet[1766]: I1031 01:20:43.186830 1766 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:20:43.186911 kubelet[1766]: I1031 01:20:43.186887 1766 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:20:43.186947 kubelet[1766]: E1031 01:20:43.186917 1766 
kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:20:43.187052 kubelet[1766]: I1031 01:20:43.187030 1766 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:20:43.187160 kubelet[1766]: I1031 01:20:43.187138 1766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:20:43.187649 kubelet[1766]: W1031 01:20:43.187590 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:43.187712 kubelet[1766]: E1031 01:20:43.187655 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:43.187786 kubelet[1766]: E1031 01:20:43.187759 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Oct 31 01:20:43.188188 kubelet[1766]: I1031 01:20:43.188169 1766 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:20:43.164000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:43.164000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:43.164000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008f26a0 a1=c000c8c5b8 a2=c0009a7ec0 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:43.165000 audit[1779]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.165000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc3ce0b910 a2=0 a3=7ffc3ce0b8fc items=0 ppid=1766 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:20:43.165000 audit[1780]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.165000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb06ca070 a2=0 a3=7fffb06ca05c items=0 ppid=1766 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.165000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:20:43.186000 audit[1782]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.186000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd23834b0 a2=0 a3=7fffd238349c items=0 ppid=1766 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:20:43.193000 audit[1784]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.193000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb861f210 a2=0 a3=7ffcb861f1fc items=0 ppid=1766 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:20:43.197765 kubelet[1766]: E1031 01:20:43.171532 1766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736ebe5ffacc21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 01:20:43.165109281 +0000 UTC m=+0.476299825,LastTimestamp:2025-10-31 01:20:43.165109281 +0000 UTC m=+0.476299825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 01:20:43.203000 audit[1789]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.203000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd354f66f0 a2=0 a3=7ffd354f66dc items=0 ppid=1766 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 31 01:20:43.206600 kubelet[1766]: I1031 01:20:43.206534 1766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 31 01:20:43.205000 audit[1792]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:43.205000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeff5dd2b0 a2=0 a3=7ffeff5dd29c items=0 ppid=1766 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:20:43.207697 kubelet[1766]: I1031 01:20:43.207662 1766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 01:20:43.207697 kubelet[1766]: I1031 01:20:43.207694 1766 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:20:43.207775 kubelet[1766]: I1031 01:20:43.207718 1766 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 01:20:43.207775 kubelet[1766]: I1031 01:20:43.207729 1766 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:20:43.207843 kubelet[1766]: E1031 01:20:43.207785 1766 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:20:43.208589 kubelet[1766]: W1031 01:20:43.208345 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:43.208589 kubelet[1766]: E1031 01:20:43.208426 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:43.207000 audit[1793]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.207000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa345e210 a2=0 a3=7fffa345e1fc items=0 ppid=1766 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:20:43.208000 audit[1794]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:43.208000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffdd5c309b0 a2=0 a3=7ffdd5c3099c items=0 ppid=1766 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:20:43.208000 audit[1795]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.208000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1505c7e0 a2=0 a3=7ffe1505c7cc items=0 ppid=1766 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:20:43.209000 audit[1796]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:43.209000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff2c3df250 a2=0 a3=7fff2c3df23c items=0 ppid=1766 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:20:43.209000 audit[1797]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:43.209000 audit[1797]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4f909a30 a2=0 a3=7ffe4f909a1c items=0 ppid=1766 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:20:43.210000 audit[1798]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:43.210000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe5a995930 a2=0 a3=7ffe5a99591c items=0 ppid=1766 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:20:43.218082 kubelet[1766]: I1031 01:20:43.218054 1766 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:20:43.218082 kubelet[1766]: I1031 01:20:43.218070 1766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:20:43.218082 kubelet[1766]: I1031 01:20:43.218087 1766 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:20:43.287822 kubelet[1766]: E1031 01:20:43.287666 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:43.308003 kubelet[1766]: E1031 01:20:43.307915 1766 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:20:43.388413 kubelet[1766]: E1031 01:20:43.388340 1766 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Oct 31 01:20:43.388818 kubelet[1766]: E1031 01:20:43.388787 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Oct 31 01:20:43.489250 kubelet[1766]: E1031 01:20:43.489196 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:43.508538 kubelet[1766]: E1031 01:20:43.508450 1766 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:20:43.590271 kubelet[1766]: E1031 01:20:43.590115 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:43.659315 kubelet[1766]: I1031 01:20:43.659248 1766 policy_none.go:49] "None policy: Start" Oct 31 01:20:43.659315 kubelet[1766]: I1031 01:20:43.659293 1766 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:20:43.659315 kubelet[1766]: I1031 01:20:43.659306 1766 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:20:43.664352 kubelet[1766]: I1031 01:20:43.664325 1766 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:20:43.662000 audit[1766]: AVC avc: denied { mac_admin } for pid=1766 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:43.662000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:43.662000 audit[1766]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d540c0 a1=c000d4a960 a2=c000d54090 a3=25 items=0 ppid=1 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:43.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:43.664578 kubelet[1766]: I1031 01:20:43.664403 1766 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:20:43.664578 kubelet[1766]: I1031 01:20:43.664499 1766 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:20:43.664578 kubelet[1766]: I1031 01:20:43.664508 1766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:20:43.666539 kubelet[1766]: E1031 01:20:43.666498 1766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:20:43.666611 kubelet[1766]: E1031 01:20:43.666581 1766 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 01:20:43.666770 kubelet[1766]: I1031 01:20:43.666748 1766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:20:43.765804 kubelet[1766]: I1031 01:20:43.765758 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:43.766265 kubelet[1766]: E1031 01:20:43.766219 1766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Oct 31 01:20:43.789850 kubelet[1766]: E1031 01:20:43.789814 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Oct 31 01:20:43.914416 kubelet[1766]: E1031 01:20:43.913928 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:43.915899 kubelet[1766]: E1031 01:20:43.915855 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:43.916164 kubelet[1766]: E1031 01:20:43.916143 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:43.967798 kubelet[1766]: I1031 01:20:43.967760 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:43.968085 kubelet[1766]: E1031 01:20:43.968064 1766 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Oct 31 01:20:43.991558 kubelet[1766]: I1031 01:20:43.991502 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:43.991649 kubelet[1766]: I1031 01:20:43.991573 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:43.991649 kubelet[1766]: I1031 01:20:43.991638 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:20:43.995905 kubelet[1766]: I1031 01:20:43.995871 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:43.995961 kubelet[1766]: I1031 01:20:43.995912 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:43.995961 kubelet[1766]: I1031 01:20:43.995939 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:43.995961 kubelet[1766]: I1031 01:20:43.995953 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:43.996078 kubelet[1766]: I1031 01:20:43.996004 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:43.996078 kubelet[1766]: I1031 01:20:43.996022 1766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:44.064325 kubelet[1766]: W1031 01:20:44.064259 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial 
tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:44.064480 kubelet[1766]: E1031 01:20:44.064334 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:44.215147 kubelet[1766]: E1031 01:20:44.215044 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:44.215591 env[1318]: time="2025-10-31T01:20:44.215556329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c5f0e0fd5f97250a9490a6f09814a68,Namespace:kube-system,Attempt:0,}" Oct 31 01:20:44.216734 kubelet[1766]: E1031 01:20:44.216715 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:44.217266 env[1318]: time="2025-10-31T01:20:44.217238052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 01:20:44.218870 kubelet[1766]: E1031 01:20:44.218821 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:44.219452 env[1318]: time="2025-10-31T01:20:44.219416188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 01:20:44.369590 kubelet[1766]: I1031 01:20:44.369554 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 
01:20:44.369906 kubelet[1766]: E1031 01:20:44.369880 1766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Oct 31 01:20:44.496302 kubelet[1766]: W1031 01:20:44.496157 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:44.496302 kubelet[1766]: E1031 01:20:44.496220 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:44.591474 kubelet[1766]: E1031 01:20:44.591282 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Oct 31 01:20:44.693718 kubelet[1766]: W1031 01:20:44.693598 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:44.693718 kubelet[1766]: E1031 01:20:44.693694 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:44.737095 
kubelet[1766]: W1031 01:20:44.736988 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:44.737263 kubelet[1766]: E1031 01:20:44.737087 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:45.172459 kubelet[1766]: I1031 01:20:45.172407 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:45.172897 kubelet[1766]: E1031 01:20:45.172780 1766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Oct 31 01:20:45.195106 kubelet[1766]: E1031 01:20:45.195010 1766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:46.192583 kubelet[1766]: E1031 01:20:46.192524 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="3.2s" Oct 31 01:20:46.528995 kubelet[1766]: W1031 01:20:46.528860 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:46.528995 kubelet[1766]: E1031 01:20:46.528919 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:46.579776 kubelet[1766]: W1031 01:20:46.579712 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:46.579776 kubelet[1766]: E1031 01:20:46.579755 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:46.616837 kubelet[1766]: W1031 01:20:46.616785 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:46.616837 kubelet[1766]: E1031 01:20:46.616835 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:46.773891 kubelet[1766]: I1031 
01:20:46.773858 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:46.774264 kubelet[1766]: E1031 01:20:46.774217 1766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Oct 31 01:20:47.015937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244023234.mount: Deactivated successfully. Oct 31 01:20:47.019202 env[1318]: time="2025-10-31T01:20:47.019154438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.022842 env[1318]: time="2025-10-31T01:20:47.022822257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.023980 env[1318]: time="2025-10-31T01:20:47.023956879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.024603 env[1318]: time="2025-10-31T01:20:47.024564020Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.025439 env[1318]: time="2025-10-31T01:20:47.025410010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.027977 env[1318]: time="2025-10-31T01:20:47.027924633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 
01:20:47.029253 env[1318]: time="2025-10-31T01:20:47.029213349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.030612 env[1318]: time="2025-10-31T01:20:47.030582839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.031566 env[1318]: time="2025-10-31T01:20:47.031540843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.033288 env[1318]: time="2025-10-31T01:20:47.033258771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.034600 env[1318]: time="2025-10-31T01:20:47.034579128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.038144 env[1318]: time="2025-10-31T01:20:47.038105647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:20:47.064931 env[1318]: time="2025-10-31T01:20:47.064867364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:20:47.064931 env[1318]: time="2025-10-31T01:20:47.064901410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:20:47.064931 env[1318]: time="2025-10-31T01:20:47.064910917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:20:47.070559 env[1318]: time="2025-10-31T01:20:47.070483763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0b47e2de15fc06461ff0c8b1e98a58218ac25fa1ea96fec4b5856c347fb647 pid=1810 runtime=io.containerd.runc.v2 Oct 31 01:20:47.076456 env[1318]: time="2025-10-31T01:20:47.076283943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:20:47.076456 env[1318]: time="2025-10-31T01:20:47.076313500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:20:47.076456 env[1318]: time="2025-10-31T01:20:47.076322718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:20:47.076456 env[1318]: time="2025-10-31T01:20:47.076422158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f0c33d38e1b5709639fc1f82d60b4f1ec98ad0d516cefaa1b8adc55459265eb pid=1833 runtime=io.containerd.runc.v2 Oct 31 01:20:47.088313 env[1318]: time="2025-10-31T01:20:47.088245896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:20:47.088313 env[1318]: time="2025-10-31T01:20:47.088274290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:20:47.088313 env[1318]: time="2025-10-31T01:20:47.088283298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:20:47.088594 env[1318]: time="2025-10-31T01:20:47.088565428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be28dcdfb520a93dc6e48fff4aa96bfd5c22ce6894c38fc0852a3f45fbb73826 pid=1839 runtime=io.containerd.runc.v2 Oct 31 01:20:47.100113 kubelet[1766]: W1031 01:20:47.092233 1766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Oct 31 01:20:47.100113 kubelet[1766]: E1031 01:20:47.092285 1766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:20:47.182235 env[1318]: time="2025-10-31T01:20:47.182168518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd0b47e2de15fc06461ff0c8b1e98a58218ac25fa1ea96fec4b5856c347fb647\"" Oct 31 01:20:47.183241 kubelet[1766]: E1031 01:20:47.183197 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:47.184971 env[1318]: time="2025-10-31T01:20:47.184923781Z" level=info msg="CreateContainer within sandbox \"cd0b47e2de15fc06461ff0c8b1e98a58218ac25fa1ea96fec4b5856c347fb647\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 01:20:47.206619 env[1318]: time="2025-10-31T01:20:47.206547454Z" level=info msg="CreateContainer within sandbox 
\"cd0b47e2de15fc06461ff0c8b1e98a58218ac25fa1ea96fec4b5856c347fb647\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f4172dd207cff992fbac23dc49064f5394e5c403dc472eecb61df2c8da32d46\"" Oct 31 01:20:47.210908 env[1318]: time="2025-10-31T01:20:47.210876109Z" level=info msg="StartContainer for \"6f4172dd207cff992fbac23dc49064f5394e5c403dc472eecb61df2c8da32d46\"" Oct 31 01:20:47.222173 env[1318]: time="2025-10-31T01:20:47.222116531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f0c33d38e1b5709639fc1f82d60b4f1ec98ad0d516cefaa1b8adc55459265eb\"" Oct 31 01:20:47.222690 kubelet[1766]: E1031 01:20:47.222662 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:47.224472 env[1318]: time="2025-10-31T01:20:47.224436791Z" level=info msg="CreateContainer within sandbox \"7f0c33d38e1b5709639fc1f82d60b4f1ec98ad0d516cefaa1b8adc55459265eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 01:20:47.227873 env[1318]: time="2025-10-31T01:20:47.227319418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c5f0e0fd5f97250a9490a6f09814a68,Namespace:kube-system,Attempt:0,} returns sandbox id \"be28dcdfb520a93dc6e48fff4aa96bfd5c22ce6894c38fc0852a3f45fbb73826\"" Oct 31 01:20:47.227959 kubelet[1766]: E1031 01:20:47.227821 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:47.229055 env[1318]: time="2025-10-31T01:20:47.229015324Z" level=info msg="CreateContainer within sandbox \"be28dcdfb520a93dc6e48fff4aa96bfd5c22ce6894c38fc0852a3f45fbb73826\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 01:20:47.246602 env[1318]: time="2025-10-31T01:20:47.246557186Z" level=info msg="CreateContainer within sandbox \"7f0c33d38e1b5709639fc1f82d60b4f1ec98ad0d516cefaa1b8adc55459265eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67a5b8e832bd0fc38dba6da48b468afc8fed6f3f4ece4fa0dc4cf7875264bdca\"" Oct 31 01:20:47.247173 env[1318]: time="2025-10-31T01:20:47.247151273Z" level=info msg="StartContainer for \"67a5b8e832bd0fc38dba6da48b468afc8fed6f3f4ece4fa0dc4cf7875264bdca\"" Oct 31 01:20:47.255950 env[1318]: time="2025-10-31T01:20:47.255887782Z" level=info msg="CreateContainer within sandbox \"be28dcdfb520a93dc6e48fff4aa96bfd5c22ce6894c38fc0852a3f45fbb73826\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4526c5f69237c8554dd06fcdde53c1c6a6c68a25edc8bc2a60a7b756ba16e6f\"" Oct 31 01:20:47.256313 env[1318]: time="2025-10-31T01:20:47.256285334Z" level=info msg="StartContainer for \"c4526c5f69237c8554dd06fcdde53c1c6a6c68a25edc8bc2a60a7b756ba16e6f\"" Oct 31 01:20:47.278028 env[1318]: time="2025-10-31T01:20:47.277855305Z" level=info msg="StartContainer for \"6f4172dd207cff992fbac23dc49064f5394e5c403dc472eecb61df2c8da32d46\" returns successfully" Oct 31 01:20:47.311375 env[1318]: time="2025-10-31T01:20:47.311321723Z" level=info msg="StartContainer for \"67a5b8e832bd0fc38dba6da48b468afc8fed6f3f4ece4fa0dc4cf7875264bdca\" returns successfully" Oct 31 01:20:47.337599 env[1318]: time="2025-10-31T01:20:47.337549148Z" level=info msg="StartContainer for \"c4526c5f69237c8554dd06fcdde53c1c6a6c68a25edc8bc2a60a7b756ba16e6f\" returns successfully" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.225249 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.225416 1766 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.227330 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.227453 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.229405 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:48.240419 kubelet[1766]: E1031 01:20:48.229492 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:49.231418 kubelet[1766]: E1031 01:20:49.231369 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:49.231566 kubelet[1766]: E1031 01:20:49.231501 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:49.231594 kubelet[1766]: E1031 01:20:49.231557 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:49.231699 kubelet[1766]: E1031 01:20:49.231683 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:49.234242 kubelet[1766]: E1031 01:20:49.234222 
1766 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 31 01:20:49.396755 kubelet[1766]: E1031 01:20:49.396714 1766 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 01:20:49.592755 kubelet[1766]: E1031 01:20:49.592640 1766 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 31 01:20:49.978187 kubelet[1766]: I1031 01:20:49.977851 1766 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:50.000150 kubelet[1766]: I1031 01:20:50.000085 1766 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:20:50.000150 kubelet[1766]: E1031 01:20:50.000144 1766 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 01:20:50.054222 kubelet[1766]: E1031 01:20:50.054182 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.155710 kubelet[1766]: E1031 01:20:50.155588 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.238790 kubelet[1766]: E1031 01:20:50.238540 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:50.238982 kubelet[1766]: E1031 01:20:50.238961 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:50.256209 kubelet[1766]: E1031 01:20:50.256129 1766 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Oct 31 01:20:50.360467 kubelet[1766]: E1031 01:20:50.360379 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.411757 kubelet[1766]: E1031 01:20:50.408872 1766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:20:50.411757 kubelet[1766]: E1031 01:20:50.409015 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:50.463070 kubelet[1766]: E1031 01:20:50.461329 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.562961 kubelet[1766]: E1031 01:20:50.562881 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.663578 kubelet[1766]: E1031 01:20:50.663492 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.764652 kubelet[1766]: E1031 01:20:50.764557 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.867468 kubelet[1766]: E1031 01:20:50.867285 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:50.968256 kubelet[1766]: E1031 01:20:50.968142 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:51.068668 kubelet[1766]: E1031 01:20:51.068573 1766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:51.088188 kubelet[1766]: I1031 01:20:51.087728 1766 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:51.100836 kubelet[1766]: I1031 01:20:51.100713 1766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:51.113340 kubelet[1766]: I1031 01:20:51.113294 1766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:20:51.130200 kubelet[1766]: I1031 01:20:51.126169 1766 apiserver.go:52] "Watching apiserver" Oct 31 01:20:51.137293 kubelet[1766]: E1031 01:20:51.137207 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:51.187727 kubelet[1766]: I1031 01:20:51.187654 1766 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:20:51.242174 kubelet[1766]: E1031 01:20:51.242108 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:51.242515 kubelet[1766]: E1031 01:20:51.242481 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:51.553512 systemd[1]: Reloading. 
Oct 31 01:20:51.684982 /usr/lib/systemd/system-generators/torcx-generator[2063]: time="2025-10-31T01:20:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:20:51.686963 /usr/lib/systemd/system-generators/torcx-generator[2063]: time="2025-10-31T01:20:51Z" level=info msg="torcx already run" Oct 31 01:20:51.828919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:20:51.828944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:20:51.857118 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:20:51.989728 kubelet[1766]: I1031 01:20:51.989637 1766 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:20:51.990059 systemd[1]: Stopping kubelet.service... Oct 31 01:20:52.017457 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:20:52.017912 systemd[1]: Stopped kubelet.service. Oct 31 01:20:52.021811 kernel: kauditd_printk_skb: 44 callbacks suppressed Oct 31 01:20:52.021919 kernel: audit: type=1131 audit(1761873652.017:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:20:52.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:52.021338 systemd[1]: Starting kubelet.service... Oct 31 01:20:52.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:52.167808 systemd[1]: Started kubelet.service. Oct 31 01:20:52.179262 kernel: audit: type=1130 audit(1761873652.167:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:20:52.223641 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:20:52.224485 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:20:52.224485 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:20:52.226087 kubelet[2119]: I1031 01:20:52.224633 2119 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:20:52.234873 kubelet[2119]: I1031 01:20:52.234804 2119 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:20:52.234873 kubelet[2119]: I1031 01:20:52.234857 2119 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:20:52.235267 kubelet[2119]: I1031 01:20:52.235238 2119 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:20:52.236987 kubelet[2119]: I1031 01:20:52.236960 2119 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 01:20:52.240028 kubelet[2119]: I1031 01:20:52.239982 2119 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:20:52.244592 kubelet[2119]: E1031 01:20:52.244549 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:20:52.244592 kubelet[2119]: I1031 01:20:52.244593 2119 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 01:20:52.252726 kubelet[2119]: I1031 01:20:52.252666 2119 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 01:20:52.253462 kubelet[2119]: I1031 01:20:52.253416 2119 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:20:52.253660 kubelet[2119]: I1031 01:20:52.253454 2119 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 01:20:52.253790 kubelet[2119]: I1031 01:20:52.253666 2119 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 01:20:52.253790 kubelet[2119]: I1031 01:20:52.253678 2119 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:20:52.253790 kubelet[2119]: I1031 01:20:52.253729 2119 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:20:52.253911 kubelet[2119]: I1031 01:20:52.253891 2119 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:20:52.253978 kubelet[2119]: I1031 01:20:52.253926 2119 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:20:52.253978 kubelet[2119]: I1031 01:20:52.253952 2119 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:20:52.253978 kubelet[2119]: I1031 01:20:52.253965 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:20:52.255725 kubelet[2119]: I1031 01:20:52.255670 2119 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:20:52.256170 kubelet[2119]: I1031 01:20:52.256150 2119 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:20:52.256643 kubelet[2119]: I1031 01:20:52.256623 2119 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:20:52.256708 kubelet[2119]: I1031 01:20:52.256660 2119 server.go:1287] "Started kubelet" Oct 31 01:20:52.258951 kubelet[2119]: I1031 01:20:52.258905 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:20:52.259454 kubelet[2119]: I1031 01:20:52.259425 2119 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:20:52.259717 kubelet[2119]: I1031 01:20:52.259600 2119 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:20:52.265938 kubelet[2119]: I1031 01:20:52.265874 2119 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" 
path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:20:52.265938 kubelet[2119]: I1031 01:20:52.265927 2119 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:20:52.265000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:52.268182 kubelet[2119]: I1031 01:20:52.265971 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:20:52.273430 kernel: audit: type=1400 audit(1761873652.265:208): avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:52.280295 kernel: audit: type=1401 audit(1761873652.265:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:52.265000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:52.281843 kubelet[2119]: I1031 01:20:52.273615 2119 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:20:52.281843 kubelet[2119]: I1031 01:20:52.274427 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:20:52.281843 kubelet[2119]: I1031 01:20:52.277377 2119 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:20:52.281843 kubelet[2119]: E1031 01:20:52.277689 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:20:52.281843 kubelet[2119]: I1031 
01:20:52.277936 2119 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:20:52.281843 kubelet[2119]: I1031 01:20:52.278059 2119 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:20:52.265000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00068f830 a1=c000c6c018 a2=c00068f800 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:52.289998 kubelet[2119]: I1031 01:20:52.285136 2119 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:20:52.289998 kubelet[2119]: I1031 01:20:52.285258 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:20:52.289998 kubelet[2119]: E1031 01:20:52.289689 2119 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:20:52.293418 kernel: audit: type=1300 audit(1761873652.265:208): arch=c000003e syscall=188 success=no exit=-22 a0=c00068f830 a1=c000c6c018 a2=c00068f800 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:52.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:52.304422 kernel: audit: type=1327 audit(1761873652.265:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:52.312474 kernel: audit: type=1400 audit(1761873652.265:209): avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:52.314264 kernel: audit: type=1401 audit(1761873652.265:209): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:52.265000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:52.265000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.307037 2119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.308952 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.309506 2119 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.309532 2119 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.309541 2119 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:20:52.314477 kubelet[2119]: E1031 01:20:52.309710 2119 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:20:52.314477 kubelet[2119]: I1031 01:20:52.310248 2119 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:20:52.265000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0003a5040 a1=c000c6c030 a2=c00068f8c0 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:52.326699 kernel: audit: type=1300 audit(1761873652.265:209): arch=c000003e syscall=188 success=no exit=-22 a0=c0003a5040 a1=c000c6c030 a2=c00068f8c0 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:52.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 
31 01:20:52.336349 kernel: audit: type=1327 audit(1761873652.265:209): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:52.372241 kubelet[2119]: I1031 01:20:52.372203 2119 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:20:52.372241 kubelet[2119]: I1031 01:20:52.372230 2119 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372255 2119 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372654 2119 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372667 2119 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372701 2119 policy_none.go:49] "None policy: Start" Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372710 2119 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:20:52.413717 kubelet[2119]: I1031 01:20:52.372719 2119 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:20:52.413717 kubelet[2119]: E1031 01:20:52.410541 2119 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:20:52.413886 kubelet[2119]: I1031 01:20:52.413832 2119 state_mem.go:75] "Updated machine memory state" Oct 31 01:20:52.415091 kubelet[2119]: I1031 01:20:52.415074 2119 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:20:52.415175 kubelet[2119]: I1031 01:20:52.415142 2119 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:20:52.414000 audit[2119]: AVC avc: denied { mac_admin } for pid=2119 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:20:52.414000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:20:52.414000 audit[2119]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00138b650 a1=c00138d800 a2=c00138b620 a3=25 items=0 ppid=1 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:52.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:20:52.415532 kubelet[2119]: I1031 01:20:52.415312 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:20:52.415532 kubelet[2119]: I1031 01:20:52.415334 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:20:52.415683 kubelet[2119]: I1031 01:20:52.415650 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:20:52.416774 kubelet[2119]: E1031 01:20:52.416748 2119 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:20:52.520703 kubelet[2119]: I1031 01:20:52.520600 2119 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:20:52.611345 kubelet[2119]: I1031 01:20:52.611308 2119 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:20:52.611566 kubelet[2119]: I1031 01:20:52.611416 2119 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.611566 kubelet[2119]: I1031 01:20:52.611472 2119 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:52.779753 kubelet[2119]: I1031 01:20:52.779624 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.779753 kubelet[2119]: I1031 01:20:52.779702 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.779753 kubelet[2119]: I1031 01:20:52.779723 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.780007 kubelet[2119]: I1031 01:20:52.779977 2119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:52.780055 kubelet[2119]: I1031 01:20:52.780008 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.780055 kubelet[2119]: I1031 01:20:52.780026 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:52.780124 kubelet[2119]: I1031 01:20:52.780093 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:20:52.780124 kubelet[2119]: I1031 01:20:52.780110 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:52.780124 kubelet[2119]: I1031 01:20:52.780122 2119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c5f0e0fd5f97250a9490a6f09814a68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c5f0e0fd5f97250a9490a6f09814a68\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:53.097163 kubelet[2119]: E1031 01:20:53.097133 2119 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 01:20:53.097545 kubelet[2119]: E1031 01:20:53.097511 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.097703 kubelet[2119]: E1031 01:20:53.097222 2119 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:20:53.097703 kubelet[2119]: E1031 01:20:53.097696 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.097751 kubelet[2119]: E1031 01:20:53.097186 2119 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 01:20:53.097893 kubelet[2119]: E1031 01:20:53.097845 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.255120 kubelet[2119]: I1031 01:20:53.255055 2119 apiserver.go:52] "Watching apiserver" Oct 31 01:20:53.278896 kubelet[2119]: I1031 01:20:53.278844 2119 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:20:53.330666 kubelet[2119]: I1031 01:20:53.330635 2119 
kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 01:20:53.330811 kubelet[2119]: I1031 01:20:53.330710 2119 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:20:53.348004 kubelet[2119]: E1031 01:20:53.347919 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.348168 kubelet[2119]: E1031 01:20:53.348150 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.348287 kubelet[2119]: E1031 01:20:53.348262 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:53.385716 kubelet[2119]: I1031 01:20:53.385527 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.385507648 podStartE2EDuration="2.385507648s" podCreationTimestamp="2025-10-31 01:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:20:53.384565526 +0000 UTC m=+1.210369781" watchObservedRunningTime="2025-10-31 01:20:53.385507648 +0000 UTC m=+1.211311903" Oct 31 01:20:53.385716 kubelet[2119]: I1031 01:20:53.385672 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.385667622 podStartE2EDuration="2.385667622s" podCreationTimestamp="2025-10-31 01:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:20:53.377768493 +0000 UTC m=+1.203572748" 
watchObservedRunningTime="2025-10-31 01:20:53.385667622 +0000 UTC m=+1.211471877" Oct 31 01:20:53.400158 kubelet[2119]: I1031 01:20:53.400100 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.40008743 podStartE2EDuration="2.40008743s" podCreationTimestamp="2025-10-31 01:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:20:53.392529029 +0000 UTC m=+1.218333284" watchObservedRunningTime="2025-10-31 01:20:53.40008743 +0000 UTC m=+1.225891686" Oct 31 01:20:53.433554 update_engine[1303]: I1031 01:20:53.433492 1303 update_attempter.cc:509] Updating boot flags... Oct 31 01:20:54.349108 kubelet[2119]: E1031 01:20:54.349081 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:54.349108 kubelet[2119]: E1031 01:20:54.349106 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:55.350332 kubelet[2119]: E1031 01:20:55.350285 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:56.430608 kubelet[2119]: I1031 01:20:56.430561 2119 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 01:20:56.431089 env[1318]: time="2025-10-31T01:20:56.430875161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 31 01:20:56.431339 kubelet[2119]: I1031 01:20:56.431084 2119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 01:20:57.316155 kubelet[2119]: I1031 01:20:57.316111 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/543069d3-26dc-4fb1-a515-977af8e48f60-kube-proxy\") pod \"kube-proxy-ht79w\" (UID: \"543069d3-26dc-4fb1-a515-977af8e48f60\") " pod="kube-system/kube-proxy-ht79w" Oct 31 01:20:57.316155 kubelet[2119]: I1031 01:20:57.316150 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543069d3-26dc-4fb1-a515-977af8e48f60-lib-modules\") pod \"kube-proxy-ht79w\" (UID: \"543069d3-26dc-4fb1-a515-977af8e48f60\") " pod="kube-system/kube-proxy-ht79w" Oct 31 01:20:57.316155 kubelet[2119]: I1031 01:20:57.316172 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543069d3-26dc-4fb1-a515-977af8e48f60-xtables-lock\") pod \"kube-proxy-ht79w\" (UID: \"543069d3-26dc-4fb1-a515-977af8e48f60\") " pod="kube-system/kube-proxy-ht79w" Oct 31 01:20:57.316402 kubelet[2119]: I1031 01:20:57.316194 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkkh\" (UniqueName: \"kubernetes.io/projected/543069d3-26dc-4fb1-a515-977af8e48f60-kube-api-access-znkkh\") pod \"kube-proxy-ht79w\" (UID: \"543069d3-26dc-4fb1-a515-977af8e48f60\") " pod="kube-system/kube-proxy-ht79w" Oct 31 01:20:57.423259 kubelet[2119]: I1031 01:20:57.423202 2119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 01:20:57.517635 kubelet[2119]: I1031 01:20:57.517571 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71b29bcd-2b06-4030-b227-b044d0c5cdfc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9pcr7\" (UID: \"71b29bcd-2b06-4030-b227-b044d0c5cdfc\") " pod="tigera-operator/tigera-operator-7dcd859c48-9pcr7" Oct 31 01:20:57.517635 kubelet[2119]: I1031 01:20:57.517611 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmn2\" (UniqueName: \"kubernetes.io/projected/71b29bcd-2b06-4030-b227-b044d0c5cdfc-kube-api-access-tbmn2\") pod \"tigera-operator-7dcd859c48-9pcr7\" (UID: \"71b29bcd-2b06-4030-b227-b044d0c5cdfc\") " pod="tigera-operator/tigera-operator-7dcd859c48-9pcr7" Oct 31 01:20:57.567617 kubelet[2119]: E1031 01:20:57.567502 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:57.568147 env[1318]: time="2025-10-31T01:20:57.568111080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht79w,Uid:543069d3-26dc-4fb1-a515-977af8e48f60,Namespace:kube-system,Attempt:0,}" Oct 31 01:20:57.583139 env[1318]: time="2025-10-31T01:20:57.583067591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:20:57.583139 env[1318]: time="2025-10-31T01:20:57.583116113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:20:57.583304 env[1318]: time="2025-10-31T01:20:57.583137313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:20:57.583404 env[1318]: time="2025-10-31T01:20:57.583332553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0450b85b30d226c46059eb28d7c007b956220d0c047ac900abf3283bfc52ef18 pid=2191 runtime=io.containerd.runc.v2 Oct 31 01:20:57.616343 env[1318]: time="2025-10-31T01:20:57.616290205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht79w,Uid:543069d3-26dc-4fb1-a515-977af8e48f60,Namespace:kube-system,Attempt:0,} returns sandbox id \"0450b85b30d226c46059eb28d7c007b956220d0c047ac900abf3283bfc52ef18\"" Oct 31 01:20:57.616978 kubelet[2119]: E1031 01:20:57.616934 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:57.618715 env[1318]: time="2025-10-31T01:20:57.618680917Z" level=info msg="CreateContainer within sandbox \"0450b85b30d226c46059eb28d7c007b956220d0c047ac900abf3283bfc52ef18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 01:20:57.640487 env[1318]: time="2025-10-31T01:20:57.640427559Z" level=info msg="CreateContainer within sandbox \"0450b85b30d226c46059eb28d7c007b956220d0c047ac900abf3283bfc52ef18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bccbbc69a1880124873a8b5ee245a2e610337279c296080ec8f19791a770143f\"" Oct 31 01:20:57.640955 env[1318]: time="2025-10-31T01:20:57.640930121Z" level=info msg="StartContainer for \"bccbbc69a1880124873a8b5ee245a2e610337279c296080ec8f19791a770143f\"" Oct 31 01:20:57.684676 env[1318]: time="2025-10-31T01:20:57.684626248Z" level=info msg="StartContainer for \"bccbbc69a1880124873a8b5ee245a2e610337279c296080ec8f19791a770143f\" returns successfully" Oct 31 01:20:57.733022 env[1318]: time="2025-10-31T01:20:57.732970556Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9pcr7,Uid:71b29bcd-2b06-4030-b227-b044d0c5cdfc,Namespace:tigera-operator,Attempt:0,}" Oct 31 01:20:57.748194 env[1318]: time="2025-10-31T01:20:57.748121986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:20:57.748194 env[1318]: time="2025-10-31T01:20:57.748155600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:20:57.748194 env[1318]: time="2025-10-31T01:20:57.748164847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:20:57.748413 env[1318]: time="2025-10-31T01:20:57.748284785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51406edb7a3dd5e2e2ac3bd052fbe6f81f34c293564d7b36ca2205995b8da8b2 pid=2275 runtime=io.containerd.runc.v2 Oct 31 01:20:57.788325 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 31 01:20:57.788462 kernel: audit: type=1325 audit(1761873657.781:211): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.781000 audit[2327]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.797420 kernel: audit: type=1300 audit(1761873657.781:211): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5967feb0 a2=0 a3=7ffe5967fe9c items=0 ppid=2244 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.781000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5967feb0 a2=0 a3=7ffe5967fe9c items=0 ppid=2244 
pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:20:57.786000 audit[2329]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.805412 kernel: audit: type=1327 audit(1761873657.781:211): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:20:57.805459 kernel: audit: type=1325 audit(1761873657.786:212): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.805501 kernel: audit: type=1300 audit(1761873657.786:212): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc968880 a2=0 a3=7ffdbc96886c items=0 ppid=2244 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.786000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc968880 a2=0 a3=7ffdbc96886c items=0 ppid=2244 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.810373 env[1318]: time="2025-10-31T01:20:57.810320816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9pcr7,Uid:71b29bcd-2b06-4030-b227-b044d0c5cdfc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"51406edb7a3dd5e2e2ac3bd052fbe6f81f34c293564d7b36ca2205995b8da8b2\"" Oct 31 
01:20:57.812170 env[1318]: time="2025-10-31T01:20:57.812135155Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 01:20:57.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:20:57.817018 kernel: audit: type=1327 audit(1761873657.786:212): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:20:57.817062 kernel: audit: type=1325 audit(1761873657.787:213): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.787000 audit[2330]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.787000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1c2ed570 a2=0 a3=7ffe1c2ed55c items=0 ppid=2244 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.828574 kernel: audit: type=1300 audit(1761873657.787:213): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1c2ed570 a2=0 a3=7ffe1c2ed55c items=0 ppid=2244 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:20:57.832518 kernel: audit: type=1327 audit(1761873657.787:213): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:20:57.832551 kernel: audit: type=1325 audit(1761873657.788:214): 
table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.788000 audit[2326]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.788000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8f2a07f0 a2=0 a3=7ffe8f2a07dc items=0 ppid=2244 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:20:57.790000 audit[2333]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.790000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee3501e10 a2=0 a3=7ffee3501dfc items=0 ppid=2244 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:20:57.791000 audit[2334]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.791000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2f3518b0 a2=0 a3=7ffc2f35189c items=0 ppid=2244 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 01:20:57.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:20:57.884000 audit[2342]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.884000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff9ac6ebc0 a2=0 a3=7fff9ac6ebac items=0 ppid=2244 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:20:57.886000 audit[2344]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.886000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd19c97720 a2=0 a3=7ffd19c9770c items=0 ppid=2244 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 31 01:20:57.889000 audit[2347]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.889000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd063a5840 a2=0 a3=7ffd063a582c items=0 
ppid=2244 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 31 01:20:57.890000 audit[2348]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.890000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc0e05620 a2=0 a3=7fffc0e0560c items=0 ppid=2244 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:20:57.892000 audit[2350]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.892000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda35a8440 a2=0 a3=7ffda35a842c items=0 ppid=2244 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.892000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:20:57.893000 audit[2351]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.893000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6b32e820 a2=0 a3=7fff6b32e80c items=0 ppid=2244 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 01:20:57.895000 audit[2353]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.895000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffec502de0 a2=0 a3=7fffec502dcc items=0 ppid=2244 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:20:57.898000 audit[2356]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.898000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffc424f0210 a2=0 a3=7ffc424f01fc items=0 ppid=2244 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 31 01:20:57.899000 audit[2357]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.899000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaffc0280 a2=0 a3=7ffeaffc026c items=0 ppid=2244 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:20:57.901000 audit[2359]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.901000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd295fc50 a2=0 a3=7ffcd295fc3c items=0 ppid=2244 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.901000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:20:57.902000 audit[2360]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.902000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5011da00 a2=0 a3=7ffe5011d9ec items=0 ppid=2244 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:20:57.904000 audit[2362]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.904000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe03374130 a2=0 a3=7ffe0337411c items=0 ppid=2244 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:20:57.907000 audit[2365]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.907000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffdba60f240 a2=0 a3=7ffdba60f22c items=0 ppid=2244 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:20:57.910000 audit[2368]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.910000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd61997140 a2=0 a3=7ffd6199712c items=0 ppid=2244 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:20:57.911000 audit[2369]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.911000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff15d20f80 a2=0 a3=7fff15d20f6c items=0 ppid=2244 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.911000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:20:57.913000 audit[2371]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.913000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc93307b70 a2=0 a3=7ffc93307b5c items=0 ppid=2244 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:20:57.916000 audit[2374]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.916000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3eec2020 a2=0 a3=7fff3eec200c items=0 ppid=2244 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:20:57.916000 audit[2375]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.916000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd66cf2f70 a2=0 a3=7ffd66cf2f5c items=0 ppid=2244 pid=2375 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:20:57.918000 audit[2377]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:20:57.918000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff2a0f59b0 a2=0 a3=7fff2a0f599c items=0 ppid=2244 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:20:57.938000 audit[2383]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:20:57.938000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe82633240 a2=0 a3=7ffe8263322c items=0 ppid=2244 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:20:57.947000 audit[2383]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Oct 31 01:20:57.947000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe82633240 a2=0 a3=7ffe8263322c items=0 ppid=2244 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:20:57.948000 audit[2388]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.948000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe2d9ee900 a2=0 a3=7ffe2d9ee8ec items=0 ppid=2244 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:20:57.950000 audit[2390]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.950000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd07cf05b0 a2=0 a3=7ffd07cf059c items=0 ppid=2244 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.950000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 31 01:20:57.954000 audit[2393]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.954000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc8f37c870 a2=0 a3=7ffc8f37c85c items=0 ppid=2244 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 31 01:20:57.955000 audit[2394]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.955000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f090500 a2=0 a3=7ffc3f0904ec items=0 ppid=2244 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:20:57.957000 audit[2396]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.957000 audit[2396]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfdb21370 a2=0 a3=7ffdfdb2135c items=0 ppid=2244 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:20:57.958000 audit[2397]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.958000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff391c9010 a2=0 a3=7fff391c8ffc items=0 ppid=2244 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 01:20:57.960000 audit[2399]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.960000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1c61f2a0 a2=0 a3=7ffc1c61f28c items=0 ppid=2244 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.960000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 31 01:20:57.962000 audit[2402]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.962000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcf817c620 a2=0 a3=7ffcf817c60c items=0 ppid=2244 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:20:57.963000 audit[2403]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.963000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4bcbc0b0 a2=0 a3=7fff4bcbc09c items=0 ppid=2244 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:20:57.965000 audit[2405]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.965000 audit[2405]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe66230320 a2=0 a3=7ffe6623030c items=0 ppid=2244 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:20:57.966000 audit[2406]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.966000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc088d2d0 a2=0 a3=7fffc088d2bc items=0 ppid=2244 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:20:57.968000 audit[2408]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.968000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1e592ef0 a2=0 a3=7ffe1e592edc items=0 ppid=2244 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.968000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:20:57.971000 audit[2411]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.971000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda2ee6bf0 a2=0 a3=7ffda2ee6bdc items=0 ppid=2244 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:20:57.973000 audit[2414]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.973000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde9749ed0 a2=0 a3=7ffde9749ebc items=0 ppid=2244 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 31 01:20:57.974000 audit[2415]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.974000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd78aa1050 a2=0 a3=7ffd78aa103c items=0 ppid=2244 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:20:57.976000 audit[2417]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.976000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffe1b848c0 a2=0 a3=7fffe1b848ac items=0 ppid=2244 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:20:57.978000 audit[2420]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.978000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffef637500 a2=0 a3=7fffef6374ec items=0 ppid=2244 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.978000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:20:57.979000 audit[2421]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.979000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6a2021b0 a2=0 a3=7ffc6a20219c items=0 ppid=2244 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:20:57.981000 audit[2423]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.981000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd6d1a7990 a2=0 a3=7ffd6d1a797c items=0 ppid=2244 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:20:57.982000 audit[2424]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.982000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2420cd70 a2=0 
a3=7ffc2420cd5c items=0 ppid=2244 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:20:57.984000 audit[2426]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.984000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdfdaa3d80 a2=0 a3=7ffdfdaa3d6c items=0 ppid=2244 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:20:57.987000 audit[2429]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:20:57.987000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe87a7e110 a2=0 a3=7ffe87a7e0fc items=0 ppid=2244 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:20:57.989000 audit[2431]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:20:57.989000 audit[2431]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff14965ac0 a2=0 a3=7fff14965aac items=0 ppid=2244 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.989000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:20:57.989000 audit[2431]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:20:57.989000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff14965ac0 a2=0 a3=7fff14965aac items=0 ppid=2244 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:20:57.989000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:20:58.357018 kubelet[2119]: E1031 01:20:58.356991 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:58.364692 kubelet[2119]: I1031 01:20:58.364631 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ht79w" podStartSLOduration=1.364613235 podStartE2EDuration="1.364613235s" podCreationTimestamp="2025-10-31 01:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:20:58.364379321 +0000 UTC m=+6.190183576" watchObservedRunningTime="2025-10-31 01:20:58.364613235 +0000 UTC m=+6.190417490" Oct 31 01:20:58.436286 kubelet[2119]: E1031 
01:20:58.436207 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:20:59.358623 kubelet[2119]: E1031 01:20:59.358597 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:00.360462 kubelet[2119]: E1031 01:21:00.360373 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:00.512170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426218592.mount: Deactivated successfully. Oct 31 01:21:01.250455 env[1318]: time="2025-10-31T01:21:01.250404164Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:01.252245 env[1318]: time="2025-10-31T01:21:01.252193668Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:01.253671 env[1318]: time="2025-10-31T01:21:01.253631227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:01.254922 env[1318]: time="2025-10-31T01:21:01.254883965Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:01.255364 env[1318]: time="2025-10-31T01:21:01.255328205Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns 
image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 01:21:01.260149 env[1318]: time="2025-10-31T01:21:01.260109858Z" level=info msg="CreateContainer within sandbox \"51406edb7a3dd5e2e2ac3bd052fbe6f81f34c293564d7b36ca2205995b8da8b2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 01:21:01.269788 env[1318]: time="2025-10-31T01:21:01.269736511Z" level=info msg="CreateContainer within sandbox \"51406edb7a3dd5e2e2ac3bd052fbe6f81f34c293564d7b36ca2205995b8da8b2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b33fc642c1ddc7e5a44c5a99a936a7693de5b30f48c11f3cfd64a41fe106faa4\"" Oct 31 01:21:01.270228 env[1318]: time="2025-10-31T01:21:01.270191521Z" level=info msg="StartContainer for \"b33fc642c1ddc7e5a44c5a99a936a7693de5b30f48c11f3cfd64a41fe106faa4\"" Oct 31 01:21:01.306163 env[1318]: time="2025-10-31T01:21:01.306111804Z" level=info msg="StartContainer for \"b33fc642c1ddc7e5a44c5a99a936a7693de5b30f48c11f3cfd64a41fe106faa4\" returns successfully" Oct 31 01:21:01.789623 kubelet[2119]: E1031 01:21:01.789583 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:01.802631 kubelet[2119]: I1031 01:21:01.802582 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9pcr7" podStartSLOduration=1.357924607 podStartE2EDuration="4.802561981s" podCreationTimestamp="2025-10-31 01:20:57 +0000 UTC" firstStartedPulling="2025-10-31 01:20:57.811420932 +0000 UTC m=+5.637225177" lastFinishedPulling="2025-10-31 01:21:01.256058296 +0000 UTC m=+9.081862551" observedRunningTime="2025-10-31 01:21:01.373596353 +0000 UTC m=+9.199400608" watchObservedRunningTime="2025-10-31 01:21:01.802561981 +0000 UTC m=+9.628366236" Oct 31 01:21:02.366148 kubelet[2119]: E1031 01:21:02.366101 2119 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:04.323248 kubelet[2119]: E1031 01:21:04.323201 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:04.368274 kubelet[2119]: E1031 01:21:04.368224 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:08.030338 sudo[1474]: pam_unix(sudo:session): session closed for user root Oct 31 01:21:08.039600 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 31 01:21:08.039666 kernel: audit: type=1106 audit(1761873668.030:262): pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:21:08.030000 audit[1474]: USER_END pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:21:08.046068 kernel: audit: type=1104 audit(1761873668.030:263): pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:21:08.030000 audit[1474]: CRED_DISP pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:08.047845 sshd[1470]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:08.050059 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:33622.service: Deactivated successfully. Oct 31 01:21:08.050807 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 01:21:08.048000 audit[1470]: USER_END pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:08.051510 systemd-logind[1300]: Session 7 logged out. Waiting for processes to exit. Oct 31 01:21:08.052620 systemd-logind[1300]: Removed session 7. Oct 31 01:21:08.059402 kernel: audit: type=1106 audit(1761873668.048:264): pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:08.048000 audit[1470]: CRED_DISP pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:08.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:33622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:08.072686 kernel: audit: type=1104 audit(1761873668.048:265): pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:08.072758 kernel: audit: type=1131 audit(1761873668.049:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.140:22-10.0.0.1:33622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:08.443000 audit[2526]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.458072 kernel: audit: type=1325 audit(1761873668.443:267): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.458229 kernel: audit: type=1300 audit(1761873668.443:267): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe101a7770 a2=0 a3=7ffe101a775c items=0 ppid=2244 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.443000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe101a7770 a2=0 a3=7ffe101a775c items=0 ppid=2244 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.443000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:08.463404 kernel: audit: type=1327 audit(1761873668.443:267): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:08.463000 audit[2526]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.468403 kernel: audit: type=1325 audit(1761873668.463:268): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.463000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe101a7770 a2=0 a3=0 items=0 ppid=2244 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:08.478421 kernel: audit: type=1300 audit(1761873668.463:268): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe101a7770 a2=0 a3=0 items=0 ppid=2244 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.483000 audit[2528]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.483000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff67a9ce30 a2=0 a3=7fff67a9ce1c items=0 ppid=2244 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.483000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:08.488000 audit[2528]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:08.488000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff67a9ce30 a2=0 a3=0 items=0 ppid=2244 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:08.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:10.209000 audit[2530]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:10.209000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd263d940 a2=0 a3=7ffdd263d92c items=0 ppid=2244 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:10.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:10.215000 audit[2530]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:10.215000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd263d940 a2=0 a3=0 items=0 ppid=2244 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 01:21:10.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:11.225000 audit[2532]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:11.225000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdfb336e70 a2=0 a3=7ffdfb336e5c items=0 ppid=2244 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:11.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:11.230000 audit[2532]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:11.230000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfb336e70 a2=0 a3=0 items=0 ppid=2244 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:11.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:12.169000 audit[2534]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:12.169000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc63a4c100 a2=0 a3=7ffc63a4c0ec items=0 ppid=2244 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:12.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:12.174000 audit[2534]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:12.174000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc63a4c100 a2=0 a3=0 items=0 ppid=2244 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:12.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:12.315340 kubelet[2119]: I1031 01:21:12.315296 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40c64caa-02a7-4100-b518-fee6eb00d4b3-tigera-ca-bundle\") pod \"calico-typha-d699fdd5d-6nmqk\" (UID: \"40c64caa-02a7-4100-b518-fee6eb00d4b3\") " pod="calico-system/calico-typha-d699fdd5d-6nmqk" Oct 31 01:21:12.315340 kubelet[2119]: I1031 01:21:12.315328 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wcjb\" (UniqueName: \"kubernetes.io/projected/40c64caa-02a7-4100-b518-fee6eb00d4b3-kube-api-access-6wcjb\") pod \"calico-typha-d699fdd5d-6nmqk\" (UID: \"40c64caa-02a7-4100-b518-fee6eb00d4b3\") " pod="calico-system/calico-typha-d699fdd5d-6nmqk" Oct 31 01:21:12.315340 kubelet[2119]: I1031 01:21:12.315343 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/40c64caa-02a7-4100-b518-fee6eb00d4b3-typha-certs\") pod \"calico-typha-d699fdd5d-6nmqk\" (UID: \"40c64caa-02a7-4100-b518-fee6eb00d4b3\") " pod="calico-system/calico-typha-d699fdd5d-6nmqk" Oct 31 01:21:12.516233 kubelet[2119]: I1031 01:21:12.516112 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-node-certs\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516233 kubelet[2119]: I1031 01:21:12.516154 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-var-lib-calico\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516233 kubelet[2119]: I1031 01:21:12.516171 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-cni-bin-dir\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516233 kubelet[2119]: I1031 01:21:12.516185 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-tigera-ca-bundle\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516233 kubelet[2119]: I1031 01:21:12.516199 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-var-run-calico\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516589 kubelet[2119]: I1031 01:21:12.516211 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-cni-log-dir\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516589 kubelet[2119]: I1031 01:21:12.516223 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-lib-modules\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516589 kubelet[2119]: I1031 01:21:12.516235 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-policysync\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516589 kubelet[2119]: I1031 01:21:12.516250 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-flexvol-driver-host\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516589 kubelet[2119]: I1031 01:21:12.516264 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-xtables-lock\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516755 kubelet[2119]: I1031 01:21:12.516277 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-cni-net-dir\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.516755 kubelet[2119]: I1031 01:21:12.516292 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8vm\" (UniqueName: \"kubernetes.io/projected/06b5b78c-8b24-4a41-98ea-2d3e8b323f01-kube-api-access-gb8vm\") pod \"calico-node-qwlfm\" (UID: \"06b5b78c-8b24-4a41-98ea-2d3e8b323f01\") " pod="calico-system/calico-node-qwlfm" Oct 31 01:21:12.581588 kubelet[2119]: E1031 01:21:12.581565 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:12.581936 env[1318]: time="2025-10-31T01:21:12.581903196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d699fdd5d-6nmqk,Uid:40c64caa-02a7-4100-b518-fee6eb00d4b3,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:12.617966 kubelet[2119]: E1031 01:21:12.617935 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:12.617966 kubelet[2119]: W1031 01:21:12.617959 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:12.618144 kubelet[2119]: E1031 01:21:12.617992 2119 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:12.619581 kubelet[2119]: E1031 01:21:12.619565 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:12.619581 kubelet[2119]: W1031 01:21:12.619579 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:12.619681 kubelet[2119]: E1031 01:21:12.619591 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:12.749641 kubelet[2119]: E1031 01:21:12.749614 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:12.749641 kubelet[2119]: W1031 01:21:12.749632 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:12.749820 kubelet[2119]: E1031 01:21:12.749665 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.038516 kubelet[2119]: E1031 01:21:13.038456 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:13.039210 env[1318]: time="2025-10-31T01:21:13.038989398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwlfm,Uid:06b5b78c-8b24-4a41-98ea-2d3e8b323f01,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:13.192437 kernel: kauditd_printk_skb: 25 callbacks suppressed Oct 31 01:21:13.192668 kernel: audit: type=1325 audit(1761873673.183:277): table=filter:99 family=2 entries=22 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:13.192714 kernel: audit: type=1300 audit(1761873673.183:277): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff998af790 a2=0 a3=7fff998af77c items=0 ppid=2244 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:13.183000 audit[2543]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:13.183000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff998af790 a2=0 a3=7fff998af77c items=0 ppid=2244 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:13.204547 kernel: audit: type=1327 audit(1761873673.183:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:13.183000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:13.200000 audit[2543]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:13.208788 kernel: audit: type=1325 audit(1761873673.200:278): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:13.208871 kernel: audit: type=1300 audit(1761873673.200:278): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff998af790 a2=0 a3=0 items=0 ppid=2244 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:13.200000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff998af790 a2=0 a3=0 items=0 ppid=2244 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:13.216692 kernel: audit: type=1327 audit(1761873673.200:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:13.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:13.328002 kubelet[2119]: E1031 01:21:13.327952 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:13.330321 kubelet[2119]: I1031 
01:21:13.330276 2119 status_manager.go:890] "Failed to get status for pod" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" pod="calico-system/csi-node-driver-b9l4v" err="pods \"csi-node-driver-b9l4v\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Oct 31 01:21:13.409910 kubelet[2119]: E1031 01:21:13.409870 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.409910 kubelet[2119]: W1031 01:21:13.409892 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410119 kubelet[2119]: E1031 01:21:13.409918 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.410119 kubelet[2119]: E1031 01:21:13.410060 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.410119 kubelet[2119]: W1031 01:21:13.410070 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410119 kubelet[2119]: E1031 01:21:13.410079 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.410263 kubelet[2119]: E1031 01:21:13.410219 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.410263 kubelet[2119]: W1031 01:21:13.410229 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410263 kubelet[2119]: E1031 01:21:13.410238 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.410448 kubelet[2119]: E1031 01:21:13.410432 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.410448 kubelet[2119]: W1031 01:21:13.410443 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410565 kubelet[2119]: E1031 01:21:13.410459 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.410649 kubelet[2119]: E1031 01:21:13.410634 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.410649 kubelet[2119]: W1031 01:21:13.410645 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410753 kubelet[2119]: E1031 01:21:13.410655 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.410878 kubelet[2119]: E1031 01:21:13.410861 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.410878 kubelet[2119]: W1031 01:21:13.410872 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.410976 kubelet[2119]: E1031 01:21:13.410881 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.411028 kubelet[2119]: E1031 01:21:13.411017 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411028 kubelet[2119]: W1031 01:21:13.411025 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411087 kubelet[2119]: E1031 01:21:13.411032 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.411157 kubelet[2119]: E1031 01:21:13.411148 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411157 kubelet[2119]: W1031 01:21:13.411155 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411230 kubelet[2119]: E1031 01:21:13.411161 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.411286 kubelet[2119]: E1031 01:21:13.411277 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411286 kubelet[2119]: W1031 01:21:13.411283 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411355 kubelet[2119]: E1031 01:21:13.411290 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.411415 kubelet[2119]: E1031 01:21:13.411410 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411453 kubelet[2119]: W1031 01:21:13.411416 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411453 kubelet[2119]: E1031 01:21:13.411423 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.411553 kubelet[2119]: E1031 01:21:13.411529 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411553 kubelet[2119]: W1031 01:21:13.411537 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411553 kubelet[2119]: E1031 01:21:13.411543 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.411699 kubelet[2119]: E1031 01:21:13.411689 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411699 kubelet[2119]: W1031 01:21:13.411696 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411776 kubelet[2119]: E1031 01:21:13.411702 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.411847 kubelet[2119]: E1031 01:21:13.411837 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411847 kubelet[2119]: W1031 01:21:13.411844 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.411911 kubelet[2119]: E1031 01:21:13.411850 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.411958 kubelet[2119]: E1031 01:21:13.411949 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.411958 kubelet[2119]: W1031 01:21:13.411956 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.412021 kubelet[2119]: E1031 01:21:13.411962 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.412093 kubelet[2119]: E1031 01:21:13.412083 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.412093 kubelet[2119]: W1031 01:21:13.412089 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.412161 kubelet[2119]: E1031 01:21:13.412095 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.412212 kubelet[2119]: E1031 01:21:13.412203 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.412212 kubelet[2119]: W1031 01:21:13.412210 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.412270 kubelet[2119]: E1031 01:21:13.412216 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:13.412358 kubelet[2119]: E1031 01:21:13.412348 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.412358 kubelet[2119]: W1031 01:21:13.412355 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.412453 kubelet[2119]: E1031 01:21:13.412361 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:13.412492 kubelet[2119]: E1031 01:21:13.412481 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:13.412492 kubelet[2119]: W1031 01:21:13.412487 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:13.412551 kubelet[2119]: E1031 01:21:13.412493 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 31 01:21:13.412623 kubelet[2119]: E1031 01:21:13.412613 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:21:13.412623 kubelet[2119]: W1031 01:21:13.412620 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:21:13.412692 kubelet[2119]: E1031 01:21:13.412630 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:21:13.421910 kubelet[2119]: I1031 01:21:13.421839 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9ef33ba9-4950-4b3a-9079-7b7964e46235-socket-dir\") pod \"csi-node-driver-b9l4v\" (UID: \"9ef33ba9-4950-4b3a-9079-7b7964e46235\") " pod="calico-system/csi-node-driver-b9l4v"
Oct 31 01:21:13.422114 kubelet[2119]: I1031 01:21:13.422072 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ef33ba9-4950-4b3a-9079-7b7964e46235-kubelet-dir\") pod \"csi-node-driver-b9l4v\" (UID: \"9ef33ba9-4950-4b3a-9079-7b7964e46235\") " pod="calico-system/csi-node-driver-b9l4v"
Oct 31 01:21:13.422343 kubelet[2119]: I1031 01:21:13.422290 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ef33ba9-4950-4b3a-9079-7b7964e46235-registration-dir\") pod \"csi-node-driver-b9l4v\" (UID: \"9ef33ba9-4950-4b3a-9079-7b7964e46235\") " pod="calico-system/csi-node-driver-b9l4v"
Oct 31 01:21:13.423278 kubelet[2119]: I1031 01:21:13.423220 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2s8\" (UniqueName: \"kubernetes.io/projected/9ef33ba9-4950-4b3a-9079-7b7964e46235-kube-api-access-4r2s8\") pod \"csi-node-driver-b9l4v\" (UID: \"9ef33ba9-4950-4b3a-9079-7b7964e46235\") " pod="calico-system/csi-node-driver-b9l4v"
Oct 31 01:21:13.423481 kubelet[2119]: I1031 01:21:13.423424 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9ef33ba9-4950-4b3a-9079-7b7964e46235-varrun\") pod \"csi-node-driver-b9l4v\" (UID: \"9ef33ba9-4950-4b3a-9079-7b7964e46235\") " pod="calico-system/csi-node-driver-b9l4v"
Oct 31 01:21:14.153677 env[1318]: time="2025-10-31T01:21:14.153598566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 01:21:14.153677 env[1318]: time="2025-10-31T01:21:14.153641156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 01:21:14.153677 env[1318]: time="2025-10-31T01:21:14.153653429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 01:21:14.154086 env[1318]: time="2025-10-31T01:21:14.153802820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b135e8b111daa2544f89462eca7a7cd7b906d71d85616cfa6c66097bb9172631 pid=2619 runtime=io.containerd.runc.v2
Oct 31 01:21:14.200648 env[1318]: time="2025-10-31T01:21:14.200589126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d699fdd5d-6nmqk,Uid:40c64caa-02a7-4100-b518-fee6eb00d4b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"b135e8b111daa2544f89462eca7a7cd7b906d71d85616cfa6c66097bb9172631\""
Oct 31 01:21:14.201203 kubelet[2119]: E1031 01:21:14.201179 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:21:14.202293 env[1318]: time="2025-10-31T01:21:14.202102064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 31 01:21:14.265289 env[1318]: time="2025-10-31T01:21:14.265173169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 01:21:14.265289 env[1318]: time="2025-10-31T01:21:14.265219697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 01:21:14.265289 env[1318]: time="2025-10-31T01:21:14.265235497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 01:21:14.265551 env[1318]: time="2025-10-31T01:21:14.265517618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79 pid=2659 runtime=io.containerd.runc.v2
Oct 31 01:21:14.292244 env[1318]: time="2025-10-31T01:21:14.292202541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwlfm,Uid:06b5b78c-8b24-4a41-98ea-2d3e8b323f01,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\""
Oct 31 01:21:14.292675 kubelet[2119]: E1031 01:21:14.292654 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:21:15.310178 kubelet[2119]: E1031 01:21:15.310136 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235"
Oct 31 01:21:16.210713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972452817.mount: Deactivated successfully.
Oct 31 01:21:17.184451 env[1318]: time="2025-10-31T01:21:17.184362155Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:21:17.186443 env[1318]: time="2025-10-31T01:21:17.186405277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:21:17.188093 env[1318]: time="2025-10-31T01:21:17.188048488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:21:17.189566 env[1318]: time="2025-10-31T01:21:17.189529854Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:21:17.190155 env[1318]: time="2025-10-31T01:21:17.190106039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 31 01:21:17.192810 env[1318]: time="2025-10-31T01:21:17.192785367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 31 01:21:17.206529 env[1318]: time="2025-10-31T01:21:17.206418029Z" level=info msg="CreateContainer within sandbox \"b135e8b111daa2544f89462eca7a7cd7b906d71d85616cfa6c66097bb9172631\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 31 01:21:17.218818 env[1318]: time="2025-10-31T01:21:17.218777056Z" level=info msg="CreateContainer within sandbox \"b135e8b111daa2544f89462eca7a7cd7b906d71d85616cfa6c66097bb9172631\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1835ba6dc58ef6f6bb8d7486d80cb631395fa9fa622b37b849c0056cfeba10bf\""
Oct 31 01:21:17.219346 env[1318]: time="2025-10-31T01:21:17.219317302Z" level=info msg="StartContainer for \"1835ba6dc58ef6f6bb8d7486d80cb631395fa9fa622b37b849c0056cfeba10bf\""
Oct 31 01:21:17.310918 kubelet[2119]: E1031 01:21:17.310866 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235"
Oct 31 01:21:17.317881 env[1318]: time="2025-10-31T01:21:17.317815577Z" level=info msg="StartContainer for \"1835ba6dc58ef6f6bb8d7486d80cb631395fa9fa622b37b849c0056cfeba10bf\" returns successfully"
Oct 31 01:21:17.394358 kubelet[2119]: E1031 01:21:17.394316 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:21:17.425346 kubelet[2119]: I1031 01:21:17.425279 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d699fdd5d-6nmqk" podStartSLOduration=2.433845491 podStartE2EDuration="5.424735354s" podCreationTimestamp="2025-10-31 01:21:12 +0000 UTC" firstStartedPulling="2025-10-31 01:21:14.201730484 +0000 UTC m=+22.027534739" lastFinishedPulling="2025-10-31 01:21:17.192620347 +0000 UTC m=+25.018424602" observedRunningTime="2025-10-31 01:21:17.412070963 +0000 UTC m=+25.237875218" watchObservedRunningTime="2025-10-31 01:21:17.424735354 +0000 UTC m=+25.250539609"
Oct 31 01:21:17.439954 kubelet[2119]: E1031 01:21:17.439842 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:21:17.439954 kubelet[2119]: W1031 01:21:17.439876 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:21:17.439954 kubelet[2119]: E1031 01:21:17.439902 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:21:17.440137 kubelet[2119]: E1031 01:21:17.440117 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:21:17.440137 kubelet[2119]: W1031 01:21:17.440126 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:21:17.440186 kubelet[2119]: E1031 01:21:17.440135 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440299 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:21:17.440768 kubelet[2119]: W1031 01:21:17.440311 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440320 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440530 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.440768 kubelet[2119]: W1031 01:21:17.440539 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440547 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440724 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.440768 kubelet[2119]: W1031 01:21:17.440732 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.440768 kubelet[2119]: E1031 01:21:17.440742 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.441001 kubelet[2119]: E1031 01:21:17.440939 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441001 kubelet[2119]: W1031 01:21:17.440950 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441001 kubelet[2119]: E1031 01:21:17.440959 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.441166 kubelet[2119]: E1031 01:21:17.441144 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441166 kubelet[2119]: W1031 01:21:17.441157 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441166 kubelet[2119]: E1031 01:21:17.441165 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.441298 kubelet[2119]: E1031 01:21:17.441279 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441346 kubelet[2119]: W1031 01:21:17.441300 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441346 kubelet[2119]: E1031 01:21:17.441311 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.441504 kubelet[2119]: E1031 01:21:17.441479 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441504 kubelet[2119]: W1031 01:21:17.441495 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441504 kubelet[2119]: E1031 01:21:17.441504 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.441687 kubelet[2119]: E1031 01:21:17.441657 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441687 kubelet[2119]: W1031 01:21:17.441682 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441775 kubelet[2119]: E1031 01:21:17.441693 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.441873 kubelet[2119]: E1031 01:21:17.441849 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.441873 kubelet[2119]: W1031 01:21:17.441865 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.441945 kubelet[2119]: E1031 01:21:17.441874 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.442064 kubelet[2119]: E1031 01:21:17.442038 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.442064 kubelet[2119]: W1031 01:21:17.442053 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.442064 kubelet[2119]: E1031 01:21:17.442062 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.442529 kubelet[2119]: E1031 01:21:17.442500 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.442582 kubelet[2119]: W1031 01:21:17.442533 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.442582 kubelet[2119]: E1031 01:21:17.442547 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.445367 kubelet[2119]: E1031 01:21:17.445322 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.445367 kubelet[2119]: W1031 01:21:17.445357 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.445505 kubelet[2119]: E1031 01:21:17.445374 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.445655 kubelet[2119]: E1031 01:21:17.445634 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.445655 kubelet[2119]: W1031 01:21:17.445651 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.445749 kubelet[2119]: E1031 01:21:17.445661 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.454170 kubelet[2119]: E1031 01:21:17.454137 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.454170 kubelet[2119]: W1031 01:21:17.454161 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.454267 kubelet[2119]: E1031 01:21:17.454180 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.454457 kubelet[2119]: E1031 01:21:17.454438 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.454457 kubelet[2119]: W1031 01:21:17.454451 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.454532 kubelet[2119]: E1031 01:21:17.454461 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.454682 kubelet[2119]: E1031 01:21:17.454652 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.454682 kubelet[2119]: W1031 01:21:17.454667 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.454682 kubelet[2119]: E1031 01:21:17.454684 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.454887 kubelet[2119]: E1031 01:21:17.454865 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.454887 kubelet[2119]: W1031 01:21:17.454877 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.454887 kubelet[2119]: E1031 01:21:17.454886 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.455035 kubelet[2119]: E1031 01:21:17.455014 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.455035 kubelet[2119]: W1031 01:21:17.455026 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.455035 kubelet[2119]: E1031 01:21:17.455033 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.455169 kubelet[2119]: E1031 01:21:17.455148 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.455169 kubelet[2119]: W1031 01:21:17.455160 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.455169 kubelet[2119]: E1031 01:21:17.455169 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.455344 kubelet[2119]: E1031 01:21:17.455322 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.455344 kubelet[2119]: W1031 01:21:17.455334 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.455344 kubelet[2119]: E1031 01:21:17.455341 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.455729 kubelet[2119]: E1031 01:21:17.455706 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.455729 kubelet[2119]: W1031 01:21:17.455721 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.455822 kubelet[2119]: E1031 01:21:17.455796 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.455910 kubelet[2119]: E1031 01:21:17.455889 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.455910 kubelet[2119]: W1031 01:21:17.455901 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.455980 kubelet[2119]: E1031 01:21:17.455950 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.456065 kubelet[2119]: E1031 01:21:17.456044 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.456065 kubelet[2119]: W1031 01:21:17.456056 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.456065 kubelet[2119]: E1031 01:21:17.456065 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.456206 kubelet[2119]: E1031 01:21:17.456185 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.456206 kubelet[2119]: W1031 01:21:17.456196 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.456206 kubelet[2119]: E1031 01:21:17.456203 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.456332 kubelet[2119]: E1031 01:21:17.456311 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.456332 kubelet[2119]: W1031 01:21:17.456323 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.456332 kubelet[2119]: E1031 01:21:17.456331 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.456501 kubelet[2119]: E1031 01:21:17.456480 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.456501 kubelet[2119]: W1031 01:21:17.456492 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.456501 kubelet[2119]: E1031 01:21:17.456499 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.457108 kubelet[2119]: E1031 01:21:17.457085 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.457108 kubelet[2119]: W1031 01:21:17.457100 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.457181 kubelet[2119]: E1031 01:21:17.457113 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.457288 kubelet[2119]: E1031 01:21:17.457266 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.457288 kubelet[2119]: W1031 01:21:17.457279 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.457288 kubelet[2119]: E1031 01:21:17.457285 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.457434 kubelet[2119]: E1031 01:21:17.457414 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.457434 kubelet[2119]: W1031 01:21:17.457425 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.457434 kubelet[2119]: E1031 01:21:17.457432 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:17.457579 kubelet[2119]: E1031 01:21:17.457558 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.457579 kubelet[2119]: W1031 01:21:17.457570 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.457579 kubelet[2119]: E1031 01:21:17.457577 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:17.457905 kubelet[2119]: E1031 01:21:17.457883 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:17.457905 kubelet[2119]: W1031 01:21:17.457895 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:17.457905 kubelet[2119]: E1031 01:21:17.457903 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.395446 kubelet[2119]: I1031 01:21:18.395415 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:21:18.395831 kubelet[2119]: E1031 01:21:18.395714 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:18.454227 kubelet[2119]: E1031 01:21:18.454195 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.454227 kubelet[2119]: W1031 01:21:18.454217 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.454403 kubelet[2119]: E1031 01:21:18.454238 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.454403 kubelet[2119]: E1031 01:21:18.454378 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.454403 kubelet[2119]: W1031 01:21:18.454400 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.454480 kubelet[2119]: E1031 01:21:18.454408 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.454567 kubelet[2119]: E1031 01:21:18.454553 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.454567 kubelet[2119]: W1031 01:21:18.454563 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.454618 kubelet[2119]: E1031 01:21:18.454570 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.454764 kubelet[2119]: E1031 01:21:18.454747 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.454764 kubelet[2119]: W1031 01:21:18.454758 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.454764 kubelet[2119]: E1031 01:21:18.454767 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.454945 kubelet[2119]: E1031 01:21:18.454927 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.454945 kubelet[2119]: W1031 01:21:18.454941 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455019 kubelet[2119]: E1031 01:21:18.454953 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.455116 kubelet[2119]: E1031 01:21:18.455100 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.455116 kubelet[2119]: W1031 01:21:18.455112 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455187 kubelet[2119]: E1031 01:21:18.455120 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.455254 kubelet[2119]: E1031 01:21:18.455240 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.455254 kubelet[2119]: W1031 01:21:18.455251 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455352 kubelet[2119]: E1031 01:21:18.455260 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.455427 kubelet[2119]: E1031 01:21:18.455412 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.455427 kubelet[2119]: W1031 01:21:18.455426 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455493 kubelet[2119]: E1031 01:21:18.455437 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.455642 kubelet[2119]: E1031 01:21:18.455627 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.455686 kubelet[2119]: W1031 01:21:18.455640 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455686 kubelet[2119]: E1031 01:21:18.455663 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.455925 kubelet[2119]: E1031 01:21:18.455901 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.455925 kubelet[2119]: W1031 01:21:18.455915 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.455925 kubelet[2119]: E1031 01:21:18.455925 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.456178 kubelet[2119]: E1031 01:21:18.456077 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.456178 kubelet[2119]: W1031 01:21:18.456086 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.456178 kubelet[2119]: E1031 01:21:18.456096 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.456301 kubelet[2119]: E1031 01:21:18.456287 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.456301 kubelet[2119]: W1031 01:21:18.456298 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.456393 kubelet[2119]: E1031 01:21:18.456309 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.456475 kubelet[2119]: E1031 01:21:18.456462 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.456475 kubelet[2119]: W1031 01:21:18.456472 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.456592 kubelet[2119]: E1031 01:21:18.456480 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.456703 kubelet[2119]: E1031 01:21:18.456687 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.456703 kubelet[2119]: W1031 01:21:18.456698 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.456802 kubelet[2119]: E1031 01:21:18.456707 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.456968 kubelet[2119]: E1031 01:21:18.456947 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.456968 kubelet[2119]: W1031 01:21:18.456967 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.457049 kubelet[2119]: E1031 01:21:18.456981 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.462217 kubelet[2119]: E1031 01:21:18.462198 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.462217 kubelet[2119]: W1031 01:21:18.462215 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.462357 kubelet[2119]: E1031 01:21:18.462232 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.462538 kubelet[2119]: E1031 01:21:18.462520 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.462538 kubelet[2119]: W1031 01:21:18.462534 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.462626 kubelet[2119]: E1031 01:21:18.462553 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.462849 kubelet[2119]: E1031 01:21:18.462833 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.462849 kubelet[2119]: W1031 01:21:18.462847 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.462951 kubelet[2119]: E1031 01:21:18.462874 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.463120 kubelet[2119]: E1031 01:21:18.463105 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.463120 kubelet[2119]: W1031 01:21:18.463116 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.463120 kubelet[2119]: E1031 01:21:18.463130 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.463431 kubelet[2119]: E1031 01:21:18.463400 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.463468 kubelet[2119]: W1031 01:21:18.463429 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.463468 kubelet[2119]: E1031 01:21:18.463456 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.463642 kubelet[2119]: E1031 01:21:18.463627 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.463642 kubelet[2119]: W1031 01:21:18.463637 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.463745 kubelet[2119]: E1031 01:21:18.463661 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.463860 kubelet[2119]: E1031 01:21:18.463846 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.463860 kubelet[2119]: W1031 01:21:18.463857 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.463992 kubelet[2119]: E1031 01:21:18.463972 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.464143 kubelet[2119]: E1031 01:21:18.464113 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.464143 kubelet[2119]: W1031 01:21:18.464134 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.464204 kubelet[2119]: E1031 01:21:18.464172 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.464322 kubelet[2119]: E1031 01:21:18.464306 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.464322 kubelet[2119]: W1031 01:21:18.464316 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.464405 kubelet[2119]: E1031 01:21:18.464329 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.464593 kubelet[2119]: E1031 01:21:18.464578 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.464635 kubelet[2119]: W1031 01:21:18.464598 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.464692 kubelet[2119]: E1031 01:21:18.464633 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.464914 kubelet[2119]: E1031 01:21:18.464897 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.464914 kubelet[2119]: W1031 01:21:18.464911 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.464997 kubelet[2119]: E1031 01:21:18.464925 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.465156 kubelet[2119]: E1031 01:21:18.465141 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.465156 kubelet[2119]: W1031 01:21:18.465154 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.465231 kubelet[2119]: E1031 01:21:18.465167 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.465462 kubelet[2119]: E1031 01:21:18.465444 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.465462 kubelet[2119]: W1031 01:21:18.465458 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.465541 kubelet[2119]: E1031 01:21:18.465476 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.465741 kubelet[2119]: E1031 01:21:18.465726 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.465741 kubelet[2119]: W1031 01:21:18.465738 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.465810 kubelet[2119]: E1031 01:21:18.465753 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.465968 kubelet[2119]: E1031 01:21:18.465957 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.465968 kubelet[2119]: W1031 01:21:18.465967 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.466051 kubelet[2119]: E1031 01:21:18.465980 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.466259 kubelet[2119]: E1031 01:21:18.466241 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.466259 kubelet[2119]: W1031 01:21:18.466253 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.466329 kubelet[2119]: E1031 01:21:18.466267 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.466506 kubelet[2119]: E1031 01:21:18.466490 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.466506 kubelet[2119]: W1031 01:21:18.466502 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.466585 kubelet[2119]: E1031 01:21:18.466516 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:21:18.466750 kubelet[2119]: E1031 01:21:18.466735 2119 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:21:18.466750 kubelet[2119]: W1031 01:21:18.466747 2119 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:21:18.466805 kubelet[2119]: E1031 01:21:18.466755 2119 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:21:18.559000 audit[2812]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:18.559000 audit[2812]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd547525e0 a2=0 a3=7ffd547525cc items=0 ppid=2244 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:18.573678 kernel: audit: type=1325 audit(1761873678.559:279): table=filter:101 family=2 entries=21 op=nft_register_rule pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:18.573757 kernel: audit: type=1300 audit(1761873678.559:279): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd547525e0 a2=0 a3=7ffd547525cc items=0 ppid=2244 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:18.573785 kernel: audit: type=1327 audit(1761873678.559:279): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:18.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:18.565000 audit[2812]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:18.565000 audit[2812]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd547525e0 a2=0 a3=7ffd547525cc items=0 ppid=2244 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:18.589042 kernel: audit: type=1325 audit(1761873678.565:280): table=nat:102 family=2 entries=19 op=nft_register_chain pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:18.589148 kernel: audit: type=1300 audit(1761873678.565:280): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd547525e0 a2=0 a3=7ffd547525cc items=0 ppid=2244 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:18.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:18.592980 kernel: audit: type=1327 audit(1761873678.565:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:18.644889 env[1318]: time="2025-10-31T01:21:18.644839371Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 
31 01:21:18.646832 env[1318]: time="2025-10-31T01:21:18.646762127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:18.648447 env[1318]: time="2025-10-31T01:21:18.648420074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:18.650095 env[1318]: time="2025-10-31T01:21:18.650072702Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:18.650597 env[1318]: time="2025-10-31T01:21:18.650574837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 01:21:18.652448 env[1318]: time="2025-10-31T01:21:18.652411210Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 01:21:18.666875 env[1318]: time="2025-10-31T01:21:18.666801872Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226\"" Oct 31 01:21:18.667290 env[1318]: time="2025-10-31T01:21:18.667264894Z" level=info msg="StartContainer for \"98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226\"" Oct 31 01:21:18.725029 env[1318]: time="2025-10-31T01:21:18.724965763Z" level=info msg="StartContainer for 
\"98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226\" returns successfully" Oct 31 01:21:18.756268 env[1318]: time="2025-10-31T01:21:18.756219738Z" level=info msg="shim disconnected" id=98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226 Oct 31 01:21:18.756268 env[1318]: time="2025-10-31T01:21:18.756264722Z" level=warning msg="cleaning up after shim disconnected" id=98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226 namespace=k8s.io Oct 31 01:21:18.756268 env[1318]: time="2025-10-31T01:21:18.756274511Z" level=info msg="cleaning up dead shim" Oct 31 01:21:18.764132 env[1318]: time="2025-10-31T01:21:18.764091591Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:21:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2858 runtime=io.containerd.runc.v2\n" Oct 31 01:21:19.199814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98f361e3154bb6d9212950d56a5473a6e0bb01d19910f784cebfcce912852226-rootfs.mount: Deactivated successfully. 
Oct 31 01:21:19.310277 kubelet[2119]: E1031 01:21:19.310218 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:19.398444 kubelet[2119]: E1031 01:21:19.398413 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:19.398987 kubelet[2119]: E1031 01:21:19.398954 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:19.399755 env[1318]: time="2025-10-31T01:21:19.399718576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 01:21:20.400252 kubelet[2119]: E1031 01:21:20.400218 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:21.310917 kubelet[2119]: E1031 01:21:21.310870 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:22.659594 env[1318]: time="2025-10-31T01:21:22.659519473Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:22.661400 env[1318]: time="2025-10-31T01:21:22.661347769Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:22.662727 env[1318]: time="2025-10-31T01:21:22.662698547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:22.664399 env[1318]: time="2025-10-31T01:21:22.664334312Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:22.664944 env[1318]: time="2025-10-31T01:21:22.664905565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 01:21:22.671887 env[1318]: time="2025-10-31T01:21:22.671828305Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 01:21:22.685769 env[1318]: time="2025-10-31T01:21:22.685706268Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36\"" Oct 31 01:21:22.686187 env[1318]: time="2025-10-31T01:21:22.686153568Z" level=info msg="StartContainer for \"47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36\"" Oct 31 01:21:22.741720 env[1318]: time="2025-10-31T01:21:22.739756058Z" level=info msg="StartContainer for \"47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36\" returns successfully" Oct 31 01:21:23.310514 kubelet[2119]: E1031 01:21:23.310453 2119 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:23.525508 kubelet[2119]: E1031 01:21:23.525473 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:24.344634 env[1318]: time="2025-10-31T01:21:24.344542033Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 01:21:24.361011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36-rootfs.mount: Deactivated successfully. 
Oct 31 01:21:24.363074 env[1318]: time="2025-10-31T01:21:24.363024921Z" level=info msg="shim disconnected" id=47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36 Oct 31 01:21:24.363074 env[1318]: time="2025-10-31T01:21:24.363063122Z" level=warning msg="cleaning up after shim disconnected" id=47e553c4762d786e3f5234b0a7ba183e13cb897693c88d81db0a438c337e9f36 namespace=k8s.io Oct 31 01:21:24.363074 env[1318]: time="2025-10-31T01:21:24.363070506Z" level=info msg="cleaning up dead shim" Oct 31 01:21:24.368411 env[1318]: time="2025-10-31T01:21:24.368366967Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:21:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2929 runtime=io.containerd.runc.v2\n" Oct 31 01:21:24.410785 kubelet[2119]: I1031 01:21:24.410755 2119 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 01:21:24.527727 kubelet[2119]: E1031 01:21:24.527697 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:24.528198 env[1318]: time="2025-10-31T01:21:24.528171451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 01:21:24.558485 kubelet[2119]: I1031 01:21:24.558430 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-ca-bundle\") pod \"whisker-757d7d9c66-vd262\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " pod="calico-system/whisker-757d7d9c66-vd262" Oct 31 01:21:24.558485 kubelet[2119]: I1031 01:21:24.558472 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzcr\" (UniqueName: \"kubernetes.io/projected/f883da0a-4f39-47f1-824b-f2e94084a2d5-kube-api-access-vqzcr\") pod \"calico-apiserver-5df7bf54df-pqphd\" (UID: 
\"f883da0a-4f39-47f1-824b-f2e94084a2d5\") " pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" Oct 31 01:21:24.558715 kubelet[2119]: I1031 01:21:24.558518 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5lpw\" (UniqueName: \"kubernetes.io/projected/30ef7351-e113-44f3-84eb-f1e0f60f06cf-kube-api-access-x5lpw\") pod \"coredns-668d6bf9bc-rnsbn\" (UID: \"30ef7351-e113-44f3-84eb-f1e0f60f06cf\") " pod="kube-system/coredns-668d6bf9bc-rnsbn" Oct 31 01:21:24.558715 kubelet[2119]: I1031 01:21:24.558549 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-784c7\" (UniqueName: \"kubernetes.io/projected/aa2fbf03-d734-4df0-9482-3da8a7ab55e1-kube-api-access-784c7\") pod \"calico-kube-controllers-86b466566-mfnxs\" (UID: \"aa2fbf03-d734-4df0-9482-3da8a7ab55e1\") " pod="calico-system/calico-kube-controllers-86b466566-mfnxs" Oct 31 01:21:24.558715 kubelet[2119]: I1031 01:21:24.558648 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06e5831d-75dc-4025-8be9-9be7b711ddfe-calico-apiserver-certs\") pod \"calico-apiserver-5df7bf54df-2pcg2\" (UID: \"06e5831d-75dc-4025-8be9-9be7b711ddfe\") " pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" Oct 31 01:21:24.558715 kubelet[2119]: I1031 01:21:24.558671 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-backend-key-pair\") pod \"whisker-757d7d9c66-vd262\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " pod="calico-system/whisker-757d7d9c66-vd262" Oct 31 01:21:24.558715 kubelet[2119]: I1031 01:21:24.558684 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7147f3bc-4883-48d8-85dc-189c66dbfbd3-goldmane-ca-bundle\") pod \"goldmane-666569f655-vzlbq\" (UID: \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\") " pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.558913 kubelet[2119]: I1031 01:21:24.558702 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03e81ffc-7bc8-4496-a870-f2e322aeb1d9-config-volume\") pod \"coredns-668d6bf9bc-lgcnx\" (UID: \"03e81ffc-7bc8-4496-a870-f2e322aeb1d9\") " pod="kube-system/coredns-668d6bf9bc-lgcnx" Oct 31 01:21:24.558913 kubelet[2119]: I1031 01:21:24.558720 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xllgf\" (UniqueName: \"kubernetes.io/projected/06e5831d-75dc-4025-8be9-9be7b711ddfe-kube-api-access-xllgf\") pod \"calico-apiserver-5df7bf54df-2pcg2\" (UID: \"06e5831d-75dc-4025-8be9-9be7b711ddfe\") " pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" Oct 31 01:21:24.558913 kubelet[2119]: I1031 01:21:24.558732 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7147f3bc-4883-48d8-85dc-189c66dbfbd3-goldmane-key-pair\") pod \"goldmane-666569f655-vzlbq\" (UID: \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\") " pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.558913 kubelet[2119]: I1031 01:21:24.558746 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa2fbf03-d734-4df0-9482-3da8a7ab55e1-tigera-ca-bundle\") pod \"calico-kube-controllers-86b466566-mfnxs\" (UID: \"aa2fbf03-d734-4df0-9482-3da8a7ab55e1\") " pod="calico-system/calico-kube-controllers-86b466566-mfnxs" Oct 31 01:21:24.558913 kubelet[2119]: I1031 01:21:24.558759 2119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv7g\" (UniqueName: \"kubernetes.io/projected/7147f3bc-4883-48d8-85dc-189c66dbfbd3-kube-api-access-gdv7g\") pod \"goldmane-666569f655-vzlbq\" (UID: \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\") " pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.559058 kubelet[2119]: I1031 01:21:24.558772 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30ef7351-e113-44f3-84eb-f1e0f60f06cf-config-volume\") pod \"coredns-668d6bf9bc-rnsbn\" (UID: \"30ef7351-e113-44f3-84eb-f1e0f60f06cf\") " pod="kube-system/coredns-668d6bf9bc-rnsbn" Oct 31 01:21:24.559058 kubelet[2119]: I1031 01:21:24.558783 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f883da0a-4f39-47f1-824b-f2e94084a2d5-calico-apiserver-certs\") pod \"calico-apiserver-5df7bf54df-pqphd\" (UID: \"f883da0a-4f39-47f1-824b-f2e94084a2d5\") " pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" Oct 31 01:21:24.559058 kubelet[2119]: I1031 01:21:24.558796 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6cl\" (UniqueName: \"kubernetes.io/projected/03e81ffc-7bc8-4496-a870-f2e322aeb1d9-kube-api-access-xz6cl\") pod \"coredns-668d6bf9bc-lgcnx\" (UID: \"03e81ffc-7bc8-4496-a870-f2e322aeb1d9\") " pod="kube-system/coredns-668d6bf9bc-lgcnx" Oct 31 01:21:24.559058 kubelet[2119]: I1031 01:21:24.558810 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7147f3bc-4883-48d8-85dc-189c66dbfbd3-config\") pod \"goldmane-666569f655-vzlbq\" (UID: \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\") " pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.559058 kubelet[2119]: 
I1031 01:21:24.558826 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g74sd\" (UniqueName: \"kubernetes.io/projected/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-kube-api-access-g74sd\") pod \"whisker-757d7d9c66-vd262\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " pod="calico-system/whisker-757d7d9c66-vd262" Oct 31 01:21:24.738330 kubelet[2119]: E1031 01:21:24.738212 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:24.739202 env[1318]: time="2025-10-31T01:21:24.738891613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnsbn,Uid:30ef7351-e113-44f3-84eb-f1e0f60f06cf,Namespace:kube-system,Attempt:0,}" Oct 31 01:21:24.742540 kubelet[2119]: E1031 01:21:24.742511 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:24.742903 env[1318]: time="2025-10-31T01:21:24.742871741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgcnx,Uid:03e81ffc-7bc8-4496-a870-f2e322aeb1d9,Namespace:kube-system,Attempt:0,}" Oct 31 01:21:24.743187 env[1318]: time="2025-10-31T01:21:24.743165152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vzlbq,Uid:7147f3bc-4883-48d8-85dc-189c66dbfbd3,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:24.747294 env[1318]: time="2025-10-31T01:21:24.747268201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-2pcg2,Uid:06e5831d-75dc-4025-8be9-9be7b711ddfe,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:21:24.747442 env[1318]: time="2025-10-31T01:21:24.747421449Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-86b466566-mfnxs,Uid:aa2fbf03-d734-4df0-9482-3da8a7ab55e1,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:24.750008 env[1318]: time="2025-10-31T01:21:24.749976029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-pqphd,Uid:f883da0a-4f39-47f1-824b-f2e94084a2d5,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:21:24.751497 env[1318]: time="2025-10-31T01:21:24.751463082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757d7d9c66-vd262,Uid:8d6ffb46-9589-48d4-a13a-509b4aec6d5f,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:24.921264 env[1318]: time="2025-10-31T01:21:24.921198196Z" level=error msg="Failed to destroy network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.921541 env[1318]: time="2025-10-31T01:21:24.921517185Z" level=error msg="encountered an error cleaning up failed sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.921607 env[1318]: time="2025-10-31T01:21:24.921558152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b466566-mfnxs,Uid:aa2fbf03-d734-4df0-9482-3da8a7ab55e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 31 01:21:24.922750 kubelet[2119]: E1031 01:21:24.921763 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.922750 kubelet[2119]: E1031 01:21:24.921844 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" Oct 31 01:21:24.922750 kubelet[2119]: E1031 01:21:24.921865 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" Oct 31 01:21:24.922887 kubelet[2119]: E1031 01:21:24.921903 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86b466566-mfnxs_calico-system(aa2fbf03-d734-4df0-9482-3da8a7ab55e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86b466566-mfnxs_calico-system(aa2fbf03-d734-4df0-9482-3da8a7ab55e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:21:24.941589 env[1318]: time="2025-10-31T01:21:24.941526551Z" level=error msg="Failed to destroy network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.941906 env[1318]: time="2025-10-31T01:21:24.941878903Z" level=error msg="encountered an error cleaning up failed sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.941960 env[1318]: time="2025-10-31T01:21:24.941930820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnsbn,Uid:30ef7351-e113-44f3-84eb-f1e0f60f06cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943040 kubelet[2119]: E1031 01:21:24.942145 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943040 kubelet[2119]: E1031 01:21:24.942207 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rnsbn" Oct 31 01:21:24.943040 kubelet[2119]: E1031 01:21:24.942227 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rnsbn" Oct 31 01:21:24.943160 env[1318]: time="2025-10-31T01:21:24.942339628Z" level=error msg="Failed to destroy network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943160 env[1318]: time="2025-10-31T01:21:24.942691640Z" level=error msg="encountered an error cleaning up failed sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943160 
env[1318]: time="2025-10-31T01:21:24.942764026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757d7d9c66-vd262,Uid:8d6ffb46-9589-48d4-a13a-509b4aec6d5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943267 kubelet[2119]: E1031 01:21:24.942263 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rnsbn_kube-system(30ef7351-e113-44f3-84eb-f1e0f60f06cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rnsbn_kube-system(30ef7351-e113-44f3-84eb-f1e0f60f06cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rnsbn" podUID="30ef7351-e113-44f3-84eb-f1e0f60f06cf" Oct 31 01:21:24.943267 kubelet[2119]: E1031 01:21:24.942921 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.943267 kubelet[2119]: E1031 01:21:24.942958 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757d7d9c66-vd262" Oct 31 01:21:24.943364 kubelet[2119]: E1031 01:21:24.942975 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757d7d9c66-vd262" Oct 31 01:21:24.943364 kubelet[2119]: E1031 01:21:24.943046 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-757d7d9c66-vd262_calico-system(8d6ffb46-9589-48d4-a13a-509b4aec6d5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-757d7d9c66-vd262_calico-system(8d6ffb46-9589-48d4-a13a-509b4aec6d5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757d7d9c66-vd262" podUID="8d6ffb46-9589-48d4-a13a-509b4aec6d5f" Oct 31 01:21:24.943752 env[1318]: time="2025-10-31T01:21:24.943627388Z" level=error msg="Failed to destroy network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.944084 
env[1318]: time="2025-10-31T01:21:24.944059639Z" level=error msg="encountered an error cleaning up failed sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.944234 env[1318]: time="2025-10-31T01:21:24.944185897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vzlbq,Uid:7147f3bc-4883-48d8-85dc-189c66dbfbd3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.944528 kubelet[2119]: E1031 01:21:24.944446 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.944528 kubelet[2119]: E1031 01:21:24.944502 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.944607 kubelet[2119]: E1031 01:21:24.944523 2119 kuberuntime_manager.go:1237] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vzlbq" Oct 31 01:21:24.944607 kubelet[2119]: E1031 01:21:24.944563 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vzlbq_calico-system(7147f3bc-4883-48d8-85dc-189c66dbfbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vzlbq_calico-system(7147f3bc-4883-48d8-85dc-189c66dbfbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:24.949345 env[1318]: time="2025-10-31T01:21:24.949283034Z" level=error msg="Failed to destroy network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.949964 env[1318]: time="2025-10-31T01:21:24.949924259Z" level=error msg="encountered an error cleaning up failed sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 01:21:24.950045 env[1318]: time="2025-10-31T01:21:24.949986696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgcnx,Uid:03e81ffc-7bc8-4496-a870-f2e322aeb1d9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.950224 kubelet[2119]: E1031 01:21:24.950188 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.950339 kubelet[2119]: E1031 01:21:24.950250 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgcnx" Oct 31 01:21:24.950339 kubelet[2119]: E1031 01:21:24.950286 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgcnx" Oct 31 
01:21:24.950528 kubelet[2119]: E1031 01:21:24.950340 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lgcnx_kube-system(03e81ffc-7bc8-4496-a870-f2e322aeb1d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lgcnx_kube-system(03e81ffc-7bc8-4496-a870-f2e322aeb1d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lgcnx" podUID="03e81ffc-7bc8-4496-a870-f2e322aeb1d9" Oct 31 01:21:24.957169 env[1318]: time="2025-10-31T01:21:24.957107005Z" level=error msg="Failed to destroy network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.957697 env[1318]: time="2025-10-31T01:21:24.957665765Z" level=error msg="encountered an error cleaning up failed sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.957753 env[1318]: time="2025-10-31T01:21:24.957714306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-pqphd,Uid:f883da0a-4f39-47f1-824b-f2e94084a2d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.957941 kubelet[2119]: E1031 01:21:24.957906 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.958033 kubelet[2119]: E1031 01:21:24.957963 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" Oct 31 01:21:24.958033 kubelet[2119]: E1031 01:21:24.957990 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" Oct 31 01:21:24.958105 kubelet[2119]: E1031 01:21:24.958041 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7bf54df-pqphd_calico-apiserver(f883da0a-4f39-47f1-824b-f2e94084a2d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5df7bf54df-pqphd_calico-apiserver(f883da0a-4f39-47f1-824b-f2e94084a2d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:21:24.965724 env[1318]: time="2025-10-31T01:21:24.965657210Z" level=error msg="Failed to destroy network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.965982 env[1318]: time="2025-10-31T01:21:24.965958796Z" level=error msg="encountered an error cleaning up failed sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.966025 env[1318]: time="2025-10-31T01:21:24.966006195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-2pcg2,Uid:06e5831d-75dc-4025-8be9-9be7b711ddfe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.966219 kubelet[2119]: E1031 01:21:24.966183 2119 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:24.966275 kubelet[2119]: E1031 01:21:24.966242 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" Oct 31 01:21:24.966275 kubelet[2119]: E1031 01:21:24.966266 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" Oct 31 01:21:24.966343 kubelet[2119]: E1031 01:21:24.966310 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df7bf54df-2pcg2_calico-apiserver(06e5831d-75dc-4025-8be9-9be7b711ddfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df7bf54df-2pcg2_calico-apiserver(06e5831d-75dc-4025-8be9-9be7b711ddfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:25.312350 env[1318]: time="2025-10-31T01:21:25.312280527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9l4v,Uid:9ef33ba9-4950-4b3a-9079-7b7964e46235,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:25.366346 env[1318]: time="2025-10-31T01:21:25.366282089Z" level=error msg="Failed to destroy network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.366709 env[1318]: time="2025-10-31T01:21:25.366646263Z" level=error msg="encountered an error cleaning up failed sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.366709 env[1318]: time="2025-10-31T01:21:25.366684935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9l4v,Uid:9ef33ba9-4950-4b3a-9079-7b7964e46235,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.366941 kubelet[2119]: E1031 01:21:25.366899 2119 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.367014 kubelet[2119]: E1031 01:21:25.366960 2119 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9l4v" Oct 31 01:21:25.367014 kubelet[2119]: E1031 01:21:25.366984 2119 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9l4v" Oct 31 01:21:25.367063 kubelet[2119]: E1031 01:21:25.367028 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9l4v" 
podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:25.368658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2-shm.mount: Deactivated successfully. Oct 31 01:21:25.530345 kubelet[2119]: I1031 01:21:25.530313 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:25.530924 env[1318]: time="2025-10-31T01:21:25.530884240Z" level=info msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" Oct 31 01:21:25.531613 kubelet[2119]: I1031 01:21:25.531550 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:25.532092 env[1318]: time="2025-10-31T01:21:25.532050652Z" level=info msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" Oct 31 01:21:25.533428 kubelet[2119]: I1031 01:21:25.533198 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:25.533690 env[1318]: time="2025-10-31T01:21:25.533662972Z" level=info msg="StopPodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" Oct 31 01:21:25.535107 kubelet[2119]: I1031 01:21:25.535081 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:25.535638 env[1318]: time="2025-10-31T01:21:25.535596564Z" level=info msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" Oct 31 01:21:25.537377 env[1318]: time="2025-10-31T01:21:25.537315473Z" level=info msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" Oct 31 
01:21:25.537883 kubelet[2119]: I1031 01:21:25.536824 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:25.538988 kubelet[2119]: I1031 01:21:25.538662 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:25.539331 env[1318]: time="2025-10-31T01:21:25.539300252Z" level=info msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" Oct 31 01:21:25.541346 kubelet[2119]: I1031 01:21:25.541311 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:25.542242 env[1318]: time="2025-10-31T01:21:25.542160055Z" level=info msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" Oct 31 01:21:25.543029 kubelet[2119]: I1031 01:21:25.543001 2119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:25.543544 env[1318]: time="2025-10-31T01:21:25.543524468Z" level=info msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" Oct 31 01:21:25.567365 env[1318]: time="2025-10-31T01:21:25.567255230Z" level=error msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" failed" error="failed to destroy network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.567924 kubelet[2119]: E1031 01:21:25.567735 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:25.567924 kubelet[2119]: E1031 01:21:25.567799 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84"} Oct 31 01:21:25.567924 kubelet[2119]: E1031 01:21:25.567865 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06e5831d-75dc-4025-8be9-9be7b711ddfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.567924 kubelet[2119]: E1031 01:21:25.567887 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06e5831d-75dc-4025-8be9-9be7b711ddfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:25.570030 env[1318]: time="2025-10-31T01:21:25.569957006Z" level=error msg="StopPodSandbox for 
\"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" failed" error="failed to destroy network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.570237 kubelet[2119]: E1031 01:21:25.570076 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:25.570237 kubelet[2119]: E1031 01:21:25.570099 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2"} Oct 31 01:21:25.570237 kubelet[2119]: E1031 01:21:25.570118 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ef33ba9-4950-4b3a-9079-7b7964e46235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.570237 kubelet[2119]: E1031 01:21:25.570133 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ef33ba9-4950-4b3a-9079-7b7964e46235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:25.591306 env[1318]: time="2025-10-31T01:21:25.591118310Z" level=error msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" failed" error="failed to destroy network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.591737 kubelet[2119]: E1031 01:21:25.591692 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:25.591792 kubelet[2119]: E1031 01:21:25.591771 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9"} Oct 31 01:21:25.591863 kubelet[2119]: E1031 01:21:25.591809 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30ef7351-e113-44f3-84eb-f1e0f60f06cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.591927 kubelet[2119]: E1031 01:21:25.591871 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30ef7351-e113-44f3-84eb-f1e0f60f06cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rnsbn" podUID="30ef7351-e113-44f3-84eb-f1e0f60f06cf" Oct 31 01:21:25.592018 env[1318]: time="2025-10-31T01:21:25.591970962Z" level=error msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" failed" error="failed to destroy network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.592151 kubelet[2119]: E1031 01:21:25.592120 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:25.592151 kubelet[2119]: E1031 01:21:25.592147 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a"} Oct 31 01:21:25.592219 kubelet[2119]: E1031 01:21:25.592199 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa2fbf03-d734-4df0-9482-3da8a7ab55e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.592268 kubelet[2119]: E1031 01:21:25.592232 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa2fbf03-d734-4df0-9482-3da8a7ab55e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:21:25.598055 env[1318]: time="2025-10-31T01:21:25.597992765Z" level=error msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" failed" error="failed to destroy network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.598410 kubelet[2119]: E1031 01:21:25.598371 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:25.598410 kubelet[2119]: E1031 01:21:25.598409 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05"} Oct 31 01:21:25.598527 kubelet[2119]: E1031 01:21:25.598429 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.598527 kubelet[2119]: E1031 01:21:25.598446 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757d7d9c66-vd262" podUID="8d6ffb46-9589-48d4-a13a-509b4aec6d5f" Oct 31 01:21:25.607023 env[1318]: time="2025-10-31T01:21:25.606968187Z" level=error msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" failed" error="failed to destroy network for 
sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.607355 kubelet[2119]: E1031 01:21:25.607326 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:25.607355 kubelet[2119]: E1031 01:21:25.607352 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89"} Oct 31 01:21:25.607499 kubelet[2119]: E1031 01:21:25.607372 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03e81ffc-7bc8-4496-a870-f2e322aeb1d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.607499 kubelet[2119]: E1031 01:21:25.607412 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03e81ffc-7bc8-4496-a870-f2e322aeb1d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lgcnx" podUID="03e81ffc-7bc8-4496-a870-f2e322aeb1d9" Oct 31 01:21:25.609630 env[1318]: time="2025-10-31T01:21:25.609563704Z" level=error msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" failed" error="failed to destroy network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.609965 kubelet[2119]: E1031 01:21:25.609870 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:25.609965 kubelet[2119]: E1031 01:21:25.609898 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002"} Oct 31 01:21:25.609965 kubelet[2119]: E1031 01:21:25.609919 2119 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Oct 31 01:21:25.609965 kubelet[2119]: E1031 01:21:25.609935 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7147f3bc-4883-48d8-85dc-189c66dbfbd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:25.619045 env[1318]: time="2025-10-31T01:21:25.618963844Z" level=error msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" failed" error="failed to destroy network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:21:25.619268 kubelet[2119]: E1031 01:21:25.619185 2119 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:25.619268 kubelet[2119]: E1031 01:21:25.619210 2119 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8"} Oct 31 01:21:25.619268 kubelet[2119]: E1031 01:21:25.619232 2119 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f883da0a-4f39-47f1-824b-f2e94084a2d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:21:25.619268 kubelet[2119]: E1031 01:21:25.619248 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f883da0a-4f39-47f1-824b-f2e94084a2d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:21:30.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.140:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:30.306173 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:54432.service. Oct 31 01:21:30.313484 kernel: audit: type=1130 audit(1761873690.304:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.140:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:30.345000 audit[3371]: USER_ACCT pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.353789 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 54432 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:30.352000 audit[3371]: CRED_ACQ pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.354423 kernel: audit: type=1101 audit(1761873690.345:282): pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.354480 kernel: audit: type=1103 audit(1761873690.352:283): pid=3371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.360651 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:30.365171 kernel: audit: type=1006 audit(1761873690.358:284): pid=3371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Oct 31 01:21:30.365211 kernel: audit: type=1300 audit(1761873690.358:284): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8f09dce0 a2=3 a3=0 items=0 ppid=1 pid=3371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 31 01:21:30.358000 audit[3371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8f09dce0 a2=3 a3=0 items=0 ppid=1 pid=3371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:30.377904 kernel: audit: type=1327 audit(1761873690.358:284): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:30.358000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:30.378743 systemd[1]: Started session-8.scope. Oct 31 01:21:30.379122 systemd-logind[1300]: New session 8 of user core. Oct 31 01:21:30.386000 audit[3371]: USER_START pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.398208 kernel: audit: type=1105 audit(1761873690.386:285): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.398273 kernel: audit: type=1103 audit(1761873690.387:286): pid=3374 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.387000 audit[3374]: CRED_ACQ pid=3374 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.526318 sshd[3371]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:30.525000 
audit[3371]: USER_END pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.528434 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:54432.service: Deactivated successfully. Oct 31 01:21:30.529132 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 01:21:30.540708 kernel: audit: type=1106 audit(1761873690.525:287): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.540764 kernel: audit: type=1104 audit(1761873690.525:288): pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.525000 audit[3371]: CRED_DISP pid=3371 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:30.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.140:22-10.0.0.1:54432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:30.535198 systemd-logind[1300]: Session 8 logged out. Waiting for processes to exit. Oct 31 01:21:30.535836 systemd-logind[1300]: Removed session 8. Oct 31 01:21:30.929049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820844042.mount: Deactivated successfully. 
Oct 31 01:21:32.662903 env[1318]: time="2025-10-31T01:21:32.662843278Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:32.689011 env[1318]: time="2025-10-31T01:21:32.688967907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:32.703759 env[1318]: time="2025-10-31T01:21:32.703726115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:32.715859 env[1318]: time="2025-10-31T01:21:32.715825069Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:21:32.716143 env[1318]: time="2025-10-31T01:21:32.716113421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 01:21:32.726402 env[1318]: time="2025-10-31T01:21:32.726355621Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 01:21:32.885244 env[1318]: time="2025-10-31T01:21:32.885188273Z" level=info msg="CreateContainer within sandbox \"ec2a9a72882d304f5b24c94d218201830fe9c9fd286201df601369f105d6fb79\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04a929cd8ef9272fa81c1dbb60325ad7a3f1ea4b2848cfc93a7baf794c844df8\"" Oct 31 01:21:32.886499 env[1318]: time="2025-10-31T01:21:32.886458508Z" level=info msg="StartContainer for 
\"04a929cd8ef9272fa81c1dbb60325ad7a3f1ea4b2848cfc93a7baf794c844df8\"" Oct 31 01:21:32.936865 env[1318]: time="2025-10-31T01:21:32.936748205Z" level=info msg="StartContainer for \"04a929cd8ef9272fa81c1dbb60325ad7a3f1ea4b2848cfc93a7baf794c844df8\" returns successfully" Oct 31 01:21:33.001535 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 01:21:33.001657 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 01:21:33.082762 env[1318]: time="2025-10-31T01:21:33.082684725Z" level=info msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.139 [INFO][3454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.139 [INFO][3454] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" iface="eth0" netns="/var/run/netns/cni-82c98614-5ffc-4bfb-10af-520cb0dc6266" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.140 [INFO][3454] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" iface="eth0" netns="/var/run/netns/cni-82c98614-5ffc-4bfb-10af-520cb0dc6266" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.141 [INFO][3454] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" iface="eth0" netns="/var/run/netns/cni-82c98614-5ffc-4bfb-10af-520cb0dc6266" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.141 [INFO][3454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.141 [INFO][3454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.192 [INFO][3464] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.192 [INFO][3464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.192 [INFO][3464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.200 [WARNING][3464] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.200 [INFO][3464] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.201 [INFO][3464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:33.204590 env[1318]: 2025-10-31 01:21:33.203 [INFO][3454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:33.205028 env[1318]: time="2025-10-31T01:21:33.204678056Z" level=info msg="TearDown network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" successfully" Oct 31 01:21:33.205028 env[1318]: time="2025-10-31T01:21:33.204725826Z" level=info msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" returns successfully" Oct 31 01:21:33.316576 kubelet[2119]: I1031 01:21:33.316530 2119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g74sd\" (UniqueName: \"kubernetes.io/projected/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-kube-api-access-g74sd\") pod \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " Oct 31 01:21:33.316576 kubelet[2119]: I1031 01:21:33.316583 2119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-backend-key-pair\") pod 
\"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " Oct 31 01:21:33.317047 kubelet[2119]: I1031 01:21:33.316609 2119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-ca-bundle\") pod \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\" (UID: \"8d6ffb46-9589-48d4-a13a-509b4aec6d5f\") " Oct 31 01:21:33.317047 kubelet[2119]: I1031 01:21:33.316959 2119 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8d6ffb46-9589-48d4-a13a-509b4aec6d5f" (UID: "8d6ffb46-9589-48d4-a13a-509b4aec6d5f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 01:21:33.319132 kubelet[2119]: I1031 01:21:33.319094 2119 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8d6ffb46-9589-48d4-a13a-509b4aec6d5f" (UID: "8d6ffb46-9589-48d4-a13a-509b4aec6d5f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 01:21:33.319132 kubelet[2119]: I1031 01:21:33.319100 2119 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-kube-api-access-g74sd" (OuterVolumeSpecName: "kube-api-access-g74sd") pod "8d6ffb46-9589-48d4-a13a-509b4aec6d5f" (UID: "8d6ffb46-9589-48d4-a13a-509b4aec6d5f"). InnerVolumeSpecName "kube-api-access-g74sd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 01:21:33.417300 kubelet[2119]: I1031 01:21:33.417250 2119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g74sd\" (UniqueName: \"kubernetes.io/projected/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-kube-api-access-g74sd\") on node \"localhost\" DevicePath \"\"" Oct 31 01:21:33.417300 kubelet[2119]: I1031 01:21:33.417287 2119 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 01:21:33.417300 kubelet[2119]: I1031 01:21:33.417298 2119 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d6ffb46-9589-48d4-a13a-509b4aec6d5f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 01:21:33.559588 kubelet[2119]: E1031 01:21:33.559060 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:33.622109 kubelet[2119]: I1031 01:21:33.621921 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qwlfm" podStartSLOduration=3.198206833 podStartE2EDuration="21.621907625s" podCreationTimestamp="2025-10-31 01:21:12 +0000 UTC" firstStartedPulling="2025-10-31 01:21:14.29304197 +0000 UTC m=+22.118846225" lastFinishedPulling="2025-10-31 01:21:32.716742762 +0000 UTC m=+40.542547017" observedRunningTime="2025-10-31 01:21:33.621683374 +0000 UTC m=+41.447487629" watchObservedRunningTime="2025-10-31 01:21:33.621907625 +0000 UTC m=+41.447711880" Oct 31 01:21:33.721628 systemd[1]: run-netns-cni\x2d82c98614\x2d5ffc\x2d4bfb\x2d10af\x2d520cb0dc6266.mount: Deactivated successfully. 
Oct 31 01:21:33.721753 systemd[1]: var-lib-kubelet-pods-8d6ffb46\x2d9589\x2d48d4\x2da13a\x2d509b4aec6d5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg74sd.mount: Deactivated successfully. Oct 31 01:21:33.721846 systemd[1]: var-lib-kubelet-pods-8d6ffb46\x2d9589\x2d48d4\x2da13a\x2d509b4aec6d5f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 01:21:33.819925 kubelet[2119]: I1031 01:21:33.819795 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c73f2cd7-5e10-439e-b9c8-8be3e29282cb-whisker-backend-key-pair\") pod \"whisker-666d989cd4-28np7\" (UID: \"c73f2cd7-5e10-439e-b9c8-8be3e29282cb\") " pod="calico-system/whisker-666d989cd4-28np7" Oct 31 01:21:33.819925 kubelet[2119]: I1031 01:21:33.819838 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c73f2cd7-5e10-439e-b9c8-8be3e29282cb-whisker-ca-bundle\") pod \"whisker-666d989cd4-28np7\" (UID: \"c73f2cd7-5e10-439e-b9c8-8be3e29282cb\") " pod="calico-system/whisker-666d989cd4-28np7" Oct 31 01:21:33.819925 kubelet[2119]: I1031 01:21:33.819854 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fxr\" (UniqueName: \"kubernetes.io/projected/c73f2cd7-5e10-439e-b9c8-8be3e29282cb-kube-api-access-s7fxr\") pod \"whisker-666d989cd4-28np7\" (UID: \"c73f2cd7-5e10-439e-b9c8-8be3e29282cb\") " pod="calico-system/whisker-666d989cd4-28np7" Oct 31 01:21:33.984954 env[1318]: time="2025-10-31T01:21:33.984891436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666d989cd4-28np7,Uid:c73f2cd7-5e10-439e-b9c8-8be3e29282cb,Namespace:calico-system,Attempt:0,}" Oct 31 01:21:34.083046 systemd-networkd[1079]: cali8756d99b557: Link UP Oct 31 01:21:34.086624 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:21:34.086734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8756d99b557: link becomes ready Oct 31 01:21:34.086754 systemd-networkd[1079]: cali8756d99b557: Gained carrier Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.015 [INFO][3487] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.026 [INFO][3487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--666d989cd4--28np7-eth0 whisker-666d989cd4- calico-system c73f2cd7-5e10-439e-b9c8-8be3e29282cb 962 0 2025-10-31 01:21:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:666d989cd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-666d989cd4-28np7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8756d99b557 [] [] }} ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.026 [INFO][3487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.047 [INFO][3501] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" HandleID="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Workload="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.047 
[INFO][3501] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" HandleID="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Workload="localhost-k8s-whisker--666d989cd4--28np7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-666d989cd4-28np7", "timestamp":"2025-10-31 01:21:34.047265367 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.047 [INFO][3501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.047 [INFO][3501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.047 [INFO][3501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.053 [INFO][3501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.058 [INFO][3501] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.062 [INFO][3501] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.064 [INFO][3501] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.065 [INFO][3501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.066 [INFO][3501] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.067 [INFO][3501] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.070 [INFO][3501] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.074 [INFO][3501] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" host="localhost" Oct 31 
01:21:34.098739 env[1318]: 2025-10-31 01:21:34.074 [INFO][3501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" host="localhost" Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.074 [INFO][3501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:34.098739 env[1318]: 2025-10-31 01:21:34.074 [INFO][3501] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" HandleID="k8s-pod-network.ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Workload="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.076 [INFO][3487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--666d989cd4--28np7-eth0", GenerateName:"whisker-666d989cd4-", Namespace:"calico-system", SelfLink:"", UID:"c73f2cd7-5e10-439e-b9c8-8be3e29282cb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"666d989cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-666d989cd4-28np7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8756d99b557", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.076 [INFO][3487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.076 [INFO][3487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8756d99b557 ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.086 [INFO][3487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.087 [INFO][3487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--666d989cd4--28np7-eth0", GenerateName:"whisker-666d989cd4-", Namespace:"calico-system", SelfLink:"", UID:"c73f2cd7-5e10-439e-b9c8-8be3e29282cb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"666d989cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a", Pod:"whisker-666d989cd4-28np7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8756d99b557", MAC:"4e:2f:f4:b7:ca:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:34.099357 env[1318]: 2025-10-31 01:21:34.097 [INFO][3487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a" Namespace="calico-system" Pod="whisker-666d989cd4-28np7" WorkloadEndpoint="localhost-k8s-whisker--666d989cd4--28np7-eth0" Oct 31 01:21:34.107755 env[1318]: time="2025-10-31T01:21:34.107691189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:34.107755 env[1318]: time="2025-10-31T01:21:34.107733990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:34.107755 env[1318]: time="2025-10-31T01:21:34.107744029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:34.108005 env[1318]: time="2025-10-31T01:21:34.107935027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a pid=3523 runtime=io.containerd.runc.v2 Oct 31 01:21:34.127412 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:34.148068 env[1318]: time="2025-10-31T01:21:34.148020315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666d989cd4-28np7,Uid:c73f2cd7-5e10-439e-b9c8-8be3e29282cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed73571e26f2a4caf822964841cdb9a3d10e0934af4d47b9265b026c9b63763a\"" Oct 31 01:21:34.150781 env[1318]: time="2025-10-31T01:21:34.149669682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:21:34.227000 audit[3617]: AVC avc: denied { write } for pid=3617 comm="tee" name="fd" dev="proc" ino=26702 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.227000 audit[3617]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd750247d9 a2=241 a3=1b6 items=1 ppid=3565 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.227000 audit: CWD 
cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 31 01:21:34.227000 audit: PATH item=0 name="/dev/fd/63" inode=25826 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.243000 audit[3634]: AVC avc: denied { write } for pid=3634 comm="tee" name="fd" dev="proc" ino=26712 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.243000 audit[3634]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7a1677ea a2=241 a3=1b6 items=1 ppid=3571 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.243000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 31 01:21:34.243000 audit: PATH item=0 name="/dev/fd/63" inode=25165 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.241000 audit[3631]: AVC avc: denied { write } for pid=3631 comm="tee" name="fd" dev="proc" ino=25175 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.241000 audit[3631]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe593337eb a2=241 a3=1b6 items=1 ppid=3566 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.241000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 31 01:21:34.241000 audit: PATH item=0 name="/dev/fd/63" inode=25164 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.256000 audit[3623]: AVC avc: denied { write } for pid=3623 comm="tee" name="fd" dev="proc" ino=25833 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.258000 audit[3647]: AVC avc: denied { write } for pid=3647 comm="tee" name="fd" dev="proc" ino=26716 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.258000 audit[3647]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0c57d7e9 a2=241 a3=1b6 items=1 ppid=3568 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.258000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 31 01:21:34.258000 audit: PATH item=0 name="/dev/fd/63" inode=24179 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.256000 audit[3623]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaa1067e9 a2=241 a3=1b6 items=1 ppid=3581 pid=3623 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.256000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 31 01:21:34.256000 audit: PATH item=0 name="/dev/fd/63" inode=26704 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.262000 audit[3642]: AVC avc: denied { write } for pid=3642 comm="tee" name="fd" dev="proc" ino=26720 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.262000 audit[3642]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdc61dc7e9 a2=241 a3=1b6 items=1 ppid=3577 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.262000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 31 01:21:34.262000 audit: PATH item=0 name="/dev/fd/63" inode=25173 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.290000 audit[3640]: AVC avc: denied { write } for pid=3640 comm="tee" name="fd" dev="proc" ino=26724 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:21:34.290000 audit[3640]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 
a0=ffffff9c a1=7ffea9e9c7da a2=241 a3=1b6 items=1 ppid=3572 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.290000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 01:21:34.290000 audit: PATH item=0 name="/dev/fd/63" inode=26709 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:21:34.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:21:34.312344 kubelet[2119]: I1031 01:21:34.312300 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6ffb46-9589-48d4-a13a-509b4aec6d5f" path="/var/lib/kubelet/pods/8d6ffb46-9589-48d4-a13a-509b4aec6d5f/volumes" Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit: BPF prog-id=10 op=LOAD Oct 31 01:21:34.405000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e0ab970 a2=98 a3=1fffffffffffffff items=0 ppid=3582 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:21:34.405000 audit: BPF prog-id=10 op=UNLOAD Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.405000 audit: BPF prog-id=11 op=LOAD Oct 31 01:21:34.405000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e0ab850 a2=94 a3=3 items=0 ppid=3582 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 31 01:21:34.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:21:34.406000 audit: BPF prog-id=11 op=UNLOAD Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { bpf } for pid=3682 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit: BPF prog-id=12 op=LOAD Oct 31 01:21:34.406000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e0ab890 a2=94 a3=7fff8e0aba70 items=0 ppid=3582 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.406000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:21:34.406000 audit: BPF prog-id=12 op=UNLOAD Oct 31 01:21:34.406000 audit[3682]: AVC avc: denied { perfmon } for pid=3682 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.406000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff8e0ab960 a2=50 a3=a000000085 items=0 ppid=3582 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.406000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { bpf } for 
pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.408000 audit: BPF prog-id=13 op=LOAD Oct 31 01:21:34.408000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffe42daa000 a2=98 a3=3 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.408000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.408000 audit: BPF prog-id=13 op=UNLOAD Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit: BPF prog-id=14 op=LOAD Oct 31 01:21:34.409000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe42da9df0 a2=94 a3=54428f items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.409000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.409000 audit: BPF prog-id=14 op=UNLOAD Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied 
{ perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.409000 audit: BPF prog-id=15 op=LOAD Oct 31 01:21:34.409000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe42da9e20 a2=94 a3=2 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.409000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.409000 audit: BPF prog-id=15 op=UNLOAD Oct 31 01:21:34.493693 env[1318]: time="2025-10-31T01:21:34.493621392Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:34.500318 env[1318]: time="2025-10-31T01:21:34.500238643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:21:34.500598 kubelet[2119]: E1031 01:21:34.500547 2119 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:21:34.500925 kubelet[2119]: E1031 01:21:34.500610 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:21:34.501911 kubelet[2119]: E1031 01:21:34.501875 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d59d10666e4b450bb44fb3ca0b0593f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Ty
pe:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:34.503905 env[1318]: time="2025-10-31T01:21:34.503848550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit: BPF prog-id=16 op=LOAD Oct 31 01:21:34.514000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe42da9ce0 a2=94 a3=1 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.514000 audit: BPF prog-id=16 op=UNLOAD Oct 31 01:21:34.514000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.514000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe42da9db0 a2=50 a3=7ffe42da9e90 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9cf0 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe42da9d20 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe42da9c30 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9d40 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9d20 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9d10 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9d40 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe42da9d20 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe42da9d40 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffe42da9d10 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.522000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.522000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe42da9d80 a2=28 a3=0 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe42da9b30 a2=50 a3=1 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit: BPF prog-id=17 op=LOAD Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe42da9b30 a2=94 a3=5 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit: BPF prog-id=17 op=UNLOAD Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe42da9be0 a2=50 a3=1 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe42da9d00 a2=4 a3=38 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { confidentiality } for pid=3683 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe42da9d50 a2=94 a3=6 items=0 ppid=3582 pid=3683 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { confidentiality } for pid=3683 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe42da9500 a2=94 a3=88 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC 
avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.523000 audit[3683]: AVC avc: denied { confidentiality } for pid=3683 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:21:34.523000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe42da9500 a2=94 a3=88 items=0 ppid=3582 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for 
pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit: BPF prog-id=18 op=LOAD Oct 31 01:21:34.530000 audit[3686]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffcd95ae0c0 a2=98 a3=1999999999999999 items=0 ppid=3582 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.530000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:21:34.530000 audit: BPF prog-id=18 op=UNLOAD Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit: BPF prog-id=19 op=LOAD Oct 31 01:21:34.530000 audit[3686]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd95adfa0 a2=94 a3=ffff items=0 ppid=3582 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.530000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:21:34.530000 audit: BPF prog-id=19 op=UNLOAD Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: 
denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { perfmon } for pid=3686 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit[3686]: AVC avc: denied { bpf } for pid=3686 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.530000 audit: BPF prog-id=20 op=LOAD Oct 31 01:21:34.530000 audit[3686]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd95adfe0 a2=94 a3=7ffcd95ae1c0 items=0 ppid=3582 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.530000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:21:34.530000 audit: BPF prog-id=20 op=UNLOAD 
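The `proctitle=` fields in the audit records above are the process's argv vector, hex-encoded with NUL bytes separating the arguments. A minimal sketch of decoding them (the helper name `decode_proctitle` is illustrative, not part of any tool shown in this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    # bytes.fromhex handles the raw hex; NUL bytes delimit individual arguments
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# First PROCTITLE value from the log above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# → bpftool map list --json
```

Applied to the longer values in this section, the same decode yields `bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_` and `bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp`, i.e. Calico setting up its failsafe-ports map and XDP prefilter program.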
Oct 31 01:21:34.562022 kubelet[2119]: E1031 01:21:34.561992 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:34.580969 systemd-networkd[1079]: vxlan.calico: Link UP Oct 31 01:21:34.580976 systemd-networkd[1079]: vxlan.calico: Gained carrier Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.592000 audit: BPF prog-id=21 op=LOAD Oct 31 01:21:34.592000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52bba030 a2=98 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.592000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.593000 audit: BPF prog-id=21 op=UNLOAD Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.593000 audit: BPF prog-id=22 op=LOAD Oct 31 01:21:34.593000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52bb9e40 a2=94 a3=54428f items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.593000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.594000 audit: BPF prog-id=22 op=UNLOAD Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit: BPF prog-id=23 op=LOAD Oct 31 01:21:34.594000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52bb9e70 a2=94 a3=2 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 31 01:21:34.594000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.594000 audit: BPF prog-id=23 op=UNLOAD Oct 31 01:21:34.594000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.594000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9d40 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.594000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.595000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.595000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd52bb9d70 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.595000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.595000 audit[3728]: AVC avc: denied { bpf } for pid=3728 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.595000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd52bb9c80 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.595000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.595000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.595000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9d90 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.595000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.595000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.595000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9d70 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.595000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.596000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.596000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9d60 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.596000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.596000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.596000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9d90 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.596000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.596000 audit[3728]: AVC avc: denied 
{ bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.596000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd52bb9d70 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.596000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.596000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.596000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd52bb9d90 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.596000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd52bb9d60 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd52bb9dd0 a2=28 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit: BPF prog-id=24 op=LOAD
Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd52bb9c40 a2=94 a3=0 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Oct 31 01:21:34.597000 audit: BPF prog-id=24 op=UNLOAD
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd52bb9c30 a2=50 a3=2800 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd52bb9c30 a2=50 a3=2800 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit: BPF prog-id=25 op=LOAD
Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd52bb9450 a2=94 a3=2 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Oct 31 01:21:34.597000 audit: BPF prog-id=25 op=UNLOAD
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { perfmon } for pid=3728 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit[3728]: AVC avc: denied { bpf } for pid=3728 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.597000 audit: BPF prog-id=26 op=LOAD
Oct 31 01:21:34.597000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd52bb9550 a2=94 a3=30 items=0 ppid=3582 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.601000 audit: BPF prog-id=27 op=LOAD
Oct 31 01:21:34.601000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04a3c750 a2=98 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.601000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.603000 audit: BPF prog-id=27 op=UNLOAD
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.603000 audit: BPF prog-id=28 op=LOAD
Oct 31 01:21:34.603000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc04a3c540 a2=94 a3=54428f items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.604000 audit: BPF prog-id=28 op=UNLOAD
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.604000 audit: BPF prog-id=29 op=LOAD
Oct 31 01:21:34.604000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc04a3c570 a2=94 a3=2 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.605000 audit: BPF prog-id=29 op=UNLOAD
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.717000 audit: BPF prog-id=30 op=LOAD
Oct 31 01:21:34.717000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc04a3c430 a2=94 a3=1 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.717000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.718000 audit: BPF prog-id=30 op=UNLOAD
Oct 31 01:21:34.718000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.718000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc04a3c500 a2=50 a3=7ffc04a3c5e0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.718000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c440 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc04a3c470 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc04a3c380 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c490 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c470 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c460 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c490 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc04a3c470 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc04a3c490 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc04a3c460 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc04a3c4d0 a2=28 a3=0 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc04a3c280 a2=50 a3=1 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit: BPF prog-id=31 op=LOAD
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc04a3c280 a2=94 a3=5 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit: BPF prog-id=31 op=UNLOAD
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc04a3c330 a2=50 a3=1 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc04a3c450 a2=4 a3=38 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.729000 audit[3740]: AVC avc: denied { confidentiality } for pid=3740 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Oct 31 01:21:34.729000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc04a3c4a0 a2=94 a3=6 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { confidentiality } for pid=3740 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc04a3bc50 a2=94 a3=88 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { perfmon } for pid=3740 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2
permissive=0 Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { confidentiality } for pid=3740 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc04a3bc50 a2=94 a3=88 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc04a3d680 a2=10 a3=208 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc04a3d520 a2=10 a3=3 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc04a3d4c0 a2=10 a3=3 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:21:34.730000 audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:21:34.730000 audit[3740]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc04a3d4c0 a2=10 a3=7 items=0 ppid=3582 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:21:34.737000 audit: BPF prog-id=26 op=UNLOAD Oct 31 01:21:34.777000 
audit[3765]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3765 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:34.777000 audit[3765]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd39ef44b0 a2=0 a3=7ffd39ef449c items=0 ppid=3582 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.777000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:34.782000 audit[3766]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:34.782000 audit[3766]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffff1e3f50 a2=0 a3=7fffff1e3f3c items=0 ppid=3582 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.782000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:34.786000 audit[3764]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:34.786000 audit[3764]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffa97a6050 a2=0 a3=7fffa97a603c items=0 ppid=3582 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.786000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:34.791000 audit[3769]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=3769 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:34.791000 audit[3769]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffeff3fb6d0 a2=0 a3=7ffeff3fb6bc items=0 ppid=3582 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:34.791000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:34.843701 env[1318]: time="2025-10-31T01:21:34.843626654Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:34.849997 env[1318]: time="2025-10-31T01:21:34.849936940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:21:34.850228 kubelet[2119]: E1031 01:21:34.850169 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:21:34.850316 kubelet[2119]: E1031 01:21:34.850232 2119 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:21:34.850439 kubelet[2119]: E1031 01:21:34.850362 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:34.851588 kubelet[2119]: E1031 01:21:34.851538 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb" Oct 31 01:21:35.305569 systemd-networkd[1079]: cali8756d99b557: Gained IPv6LL Oct 31 01:21:35.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:54448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:35.529366 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:54448.service. 
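The `proctitle=` fields in the audit records above are the process's argv encoded as hex with NUL separators. A minimal sketch (an editorial aid, not part of the log tooling) that decodes the bpftool invocation recorded repeatedly in the AVC/SYSCALL records:

```python
# Audit PROCTITLE records encode argv as a hex string with NUL separators.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The bpftool command line recorded above (syscall=321 is bpf(2) on x86_64,
# exit=-22 is -EINVAL):
args = decode_proctitle(
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
    "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F707265"
    "66696C7465725F76315F63616C69636F5F746D705F41"
)
print(args)
# → ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

The denied capabilities in the surrounding AVC records are CAP_PERFMON (38) and CAP_BPF (39), and the lockdown denial explains the -EINVAL returns from bpf(2).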
Oct 31 01:21:35.531323 kernel: kauditd_printk_skb: 558 callbacks suppressed Oct 31 01:21:35.531407 kernel: audit: type=1130 audit(1761873695.528:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:54448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:35.559000 audit[3780]: USER_ACCT pid=3780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.560866 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:35.567563 kubelet[2119]: E1031 01:21:35.567533 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:35.576130 kernel: audit: type=1101 audit(1761873695.559:400): pid=3780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.576269 kernel: audit: type=1103 audit(1761873695.567:401): pid=3780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.567000 audit[3780]: CRED_ACQ pid=3780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.576365 kubelet[2119]: E1031 
01:21:35.569738 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb" Oct 31 01:21:35.569298 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:35.579434 systemd[1]: Started session-9.scope. Oct 31 01:21:35.581218 systemd-logind[1300]: New session 9 of user core. 
Oct 31 01:21:35.582581 kernel: audit: type=1006 audit(1761873695.567:402): pid=3780 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Oct 31 01:21:35.567000 audit[3780]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc2cd11b0 a2=3 a3=0 items=0 ppid=1 pid=3780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:35.593970 kernel: audit: type=1300 audit(1761873695.567:402): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc2cd11b0 a2=3 a3=0 items=0 ppid=1 pid=3780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:35.594024 kernel: audit: type=1327 audit(1761873695.567:402): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:35.567000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:35.607000 audit[3793]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:35.610154 systemd[1]: run-containerd-runc-k8s.io-04a929cd8ef9272fa81c1dbb60325ad7a3f1ea4b2848cfc93a7baf794c844df8-runc.Y6cbvq.mount: Deactivated successfully. 
Oct 31 01:21:35.613427 kernel: audit: type=1325 audit(1761873695.607:403): table=filter:107 family=2 entries=20 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:35.607000 audit[3780]: USER_START pid=3780 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.621450 kernel: audit: type=1105 audit(1761873695.607:404): pid=3780 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.607000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb6975410 a2=0 a3=7ffdb69753fc items=0 ppid=2244 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:35.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:35.637590 kernel: audit: type=1300 audit(1761873695.607:403): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb6975410 a2=0 a3=7ffdb69753fc items=0 ppid=2244 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:35.637631 kernel: audit: type=1327 audit(1761873695.607:403): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 
01:21:35.612000 audit[3794]: CRED_ACQ pid=3794 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.631000 audit[3793]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:35.631000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb6975410 a2=0 a3=0 items=0 ppid=2244 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:35.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:35.726059 sshd[3780]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:35.725000 audit[3780]: USER_END pid=3780 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.725000 audit[3780]: CRED_DISP pid=3780 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:35.728418 systemd-logind[1300]: Session 9 logged out. Waiting for processes to exit. Oct 31 01:21:35.728692 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:54448.service: Deactivated successfully. 
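The AVC records in this log all follow the same field layout. A minimal parsing sketch (the field set is assumed from the records above; full-featured tools such as ausearch handle many more record types), tested against one of the literal lines from the log:

```python
import re

# Pull the key fields out of an SELinux AVC denial record.
AVC_RE = re.compile(
    r"AVC avc:\s+denied\s+\{ (?P<perms>[^}]+) \}\s+for\s+pid=(?P<pid>\d+)"
    r"\s+comm=\"(?P<comm>[^\"]+)\".*?tclass=(?P<tclass>\S+)"
)

line = ('audit[3740]: AVC avc: denied { bpf } for pid=3740 comm="bpftool" '
        'capability=39 scontext=system_u:system_r:kernel_t:s0 '
        'tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0')

m = AVC_RE.search(line)
print(m.group("perms").strip(), m.group("comm"), m.group("tclass"))
# → bpf bpftool capability2
```

`permissive=0` in these records means the policy was enforcing, so each denial actually blocked the operation rather than merely logging it.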
Oct 31 01:21:35.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.140:22-10.0.0.1:54448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:35.729725 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 01:21:35.730673 systemd-logind[1300]: Removed session 9. Oct 31 01:21:36.311046 env[1318]: time="2025-10-31T01:21:36.310997971Z" level=info msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" Oct 31 01:21:36.330469 systemd-networkd[1079]: vxlan.calico: Gained IPv6LL Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" iface="eth0" netns="/var/run/netns/cni-acbd4533-258d-a067-2ed2-0d668ed96832" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" iface="eth0" netns="/var/run/netns/cni-acbd4533-258d-a067-2ed2-0d668ed96832" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" iface="eth0" netns="/var/run/netns/cni-acbd4533-258d-a067-2ed2-0d668ed96832" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.592 [INFO][3832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.614 [INFO][3841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.614 [INFO][3841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.614 [INFO][3841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.619 [WARNING][3841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.619 [INFO][3841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.621 [INFO][3841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:36.624643 env[1318]: 2025-10-31 01:21:36.622 [INFO][3832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:36.625068 env[1318]: time="2025-10-31T01:21:36.624698523Z" level=info msg="TearDown network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" successfully" Oct 31 01:21:36.625068 env[1318]: time="2025-10-31T01:21:36.624729932Z" level=info msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" returns successfully" Oct 31 01:21:36.625461 env[1318]: time="2025-10-31T01:21:36.625416050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vzlbq,Uid:7147f3bc-4883-48d8-85dc-189c66dbfbd3,Namespace:calico-system,Attempt:1,}" Oct 31 01:21:36.626983 systemd[1]: run-netns-cni\x2dacbd4533\x2d258d\x2da067\x2d2ed2\x2d0d668ed96832.mount: Deactivated successfully. 
Oct 31 01:21:36.731002 systemd-networkd[1079]: cali05797aed71b: Link UP Oct 31 01:21:36.734561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:21:36.734911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali05797aed71b: link becomes ready Oct 31 01:21:36.734613 systemd-networkd[1079]: cali05797aed71b: Gained carrier Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.676 [INFO][3850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vzlbq-eth0 goldmane-666569f655- calico-system 7147f3bc-4883-48d8-85dc-189c66dbfbd3 1008 0 2025-10-31 01:21:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vzlbq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali05797aed71b [] [] }} ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.676 [INFO][3850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.697 [INFO][3865] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" HandleID="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.697 [INFO][3865] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" HandleID="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vzlbq", "timestamp":"2025-10-31 01:21:36.697804662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.697 [INFO][3865] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.697 [INFO][3865] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.698 [INFO][3865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.704 [INFO][3865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.709 [INFO][3865] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.712 [INFO][3865] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.714 [INFO][3865] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.716 [INFO][3865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.716 [INFO][3865] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.717 [INFO][3865] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392 Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.720 [INFO][3865] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.727 [INFO][3865] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.727 [INFO][3865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" host="localhost" Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.727 [INFO][3865] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:36.747712 env[1318]: 2025-10-31 01:21:36.727 [INFO][3865] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" HandleID="k8s-pod-network.6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.729 [INFO][3850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vzlbq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7147f3bc-4883-48d8-85dc-189c66dbfbd3", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vzlbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05797aed71b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.729 [INFO][3850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.729 [INFO][3850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05797aed71b ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.734 [INFO][3850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.735 [INFO][3850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vzlbq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7147f3bc-4883-48d8-85dc-189c66dbfbd3", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 10, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392", Pod:"goldmane-666569f655-vzlbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05797aed71b", MAC:"a2:90:93:2a:fd:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:36.748312 env[1318]: 2025-10-31 01:21:36.745 [INFO][3850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392" Namespace="calico-system" Pod="goldmane-666569f655-vzlbq" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:36.758266 env[1318]: time="2025-10-31T01:21:36.758195294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:36.758266 env[1318]: time="2025-10-31T01:21:36.758232724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:36.758266 env[1318]: time="2025-10-31T01:21:36.758243534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:36.758502 env[1318]: time="2025-10-31T01:21:36.758413744Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392 pid=3891 runtime=io.containerd.runc.v2 Oct 31 01:21:36.759000 audit[3900]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:36.759000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffee7b69520 a2=0 a3=7ffee7b6950c items=0 ppid=3582 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:36.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:36.782252 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:36.814646 env[1318]: time="2025-10-31T01:21:36.814595947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vzlbq,Uid:7147f3bc-4883-48d8-85dc-189c66dbfbd3,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392\"" Oct 31 01:21:36.816804 env[1318]: time="2025-10-31T01:21:36.816783263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:21:37.176323 env[1318]: time="2025-10-31T01:21:37.176253595Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:37.177531 env[1318]: time="2025-10-31T01:21:37.177458226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:21:37.177777 kubelet[2119]: E1031 01:21:37.177732 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:21:37.178106 kubelet[2119]: E1031 01:21:37.177795 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:21:37.178106 kubelet[2119]: E1031 01:21:37.177984 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vzlbq_calico-system(7147f3bc-4883-48d8-85dc-189c66dbfbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:37.180064 kubelet[2119]: E1031 01:21:37.180039 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:37.311303 env[1318]: time="2025-10-31T01:21:37.311164847Z" level=info msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" iface="eth0" netns="/var/run/netns/cni-c8b3e8f5-65ca-1107-38da-c0635476a3ad" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" iface="eth0" netns="/var/run/netns/cni-c8b3e8f5-65ca-1107-38da-c0635476a3ad" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" iface="eth0" netns="/var/run/netns/cni-c8b3e8f5-65ca-1107-38da-c0635476a3ad" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.348 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.363 [INFO][3946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.364 [INFO][3946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.364 [INFO][3946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.369 [WARNING][3946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.370 [INFO][3946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.373 [INFO][3946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:37.376519 env[1318]: 2025-10-31 01:21:37.374 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:37.377435 env[1318]: time="2025-10-31T01:21:37.377389053Z" level=info msg="TearDown network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" successfully" Oct 31 01:21:37.377435 env[1318]: time="2025-10-31T01:21:37.377425732Z" level=info msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" returns successfully" Oct 31 01:21:37.377736 kubelet[2119]: E1031 01:21:37.377711 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:37.378096 env[1318]: time="2025-10-31T01:21:37.378049723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnsbn,Uid:30ef7351-e113-44f3-84eb-f1e0f60f06cf,Namespace:kube-system,Attempt:1,}" Oct 31 01:21:37.463036 systemd-networkd[1079]: cali4f06b77d8c3: Link UP Oct 31 01:21:37.465228 systemd-networkd[1079]: cali4f06b77d8c3: Gained carrier Oct 31 01:21:37.465458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4f06b77d8c3: link becomes ready Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.415 [INFO][3956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0 coredns-668d6bf9bc- kube-system 30ef7351-e113-44f3-84eb-f1e0f60f06cf 1019 0 2025-10-31 01:20:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rnsbn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f06b77d8c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.415 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.432 [INFO][3969] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" HandleID="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.433 [INFO][3969] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" HandleID="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rnsbn", "timestamp":"2025-10-31 01:21:37.432918996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.433 [INFO][3969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.433 [INFO][3969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.433 [INFO][3969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.438 [INFO][3969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.443 [INFO][3969] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.446 [INFO][3969] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.448 [INFO][3969] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.449 [INFO][3969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.449 [INFO][3969] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.451 [INFO][3969] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0 Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.453 [INFO][3969] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.459 [INFO][3969] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" host="localhost" Oct 31 
01:21:37.478109 env[1318]: 2025-10-31 01:21:37.459 [INFO][3969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" host="localhost" Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.459 [INFO][3969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:37.478109 env[1318]: 2025-10-31 01:21:37.459 [INFO][3969] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" HandleID="k8s-pod-network.b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.461 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30ef7351-e113-44f3-84eb-f1e0f60f06cf", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rnsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f06b77d8c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.461 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.461 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f06b77d8c3 ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.465 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.466 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30ef7351-e113-44f3-84eb-f1e0f60f06cf", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0", Pod:"coredns-668d6bf9bc-rnsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f06b77d8c3", MAC:"b2:ec:05:63:0b:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:37.478977 env[1318]: 2025-10-31 01:21:37.475 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnsbn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:37.486772 env[1318]: time="2025-10-31T01:21:37.486706409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:37.486929 env[1318]: time="2025-10-31T01:21:37.486803000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:37.486929 env[1318]: time="2025-10-31T01:21:37.486833086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:37.487134 env[1318]: time="2025-10-31T01:21:37.487079989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0 pid=3992 runtime=io.containerd.runc.v2 Oct 31 01:21:37.485000 audit[3994]: NETFILTER_CFG table=filter:110 family=2 entries=46 op=nft_register_chain pid=3994 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:37.485000 audit[3994]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffff67ec760 a2=0 a3=7ffff67ec74c items=0 ppid=3582 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:37.485000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:37.506026 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:37.528585 env[1318]: time="2025-10-31T01:21:37.528537671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnsbn,Uid:30ef7351-e113-44f3-84eb-f1e0f60f06cf,Namespace:kube-system,Attempt:1,} returns sandbox id \"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0\"" Oct 31 01:21:37.529431 kubelet[2119]: E1031 01:21:37.529411 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:37.531495 env[1318]: time="2025-10-31T01:21:37.531447933Z" level=info msg="CreateContainer within sandbox \"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:21:37.547404 env[1318]: time="2025-10-31T01:21:37.547323310Z" level=info msg="CreateContainer within sandbox \"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12e2015fd6464e1e310f7f1609e299f5ad137b8530b5ea08bbefedd843c13154\"" Oct 31 01:21:37.547785 env[1318]: time="2025-10-31T01:21:37.547754960Z" level=info msg="StartContainer for \"12e2015fd6464e1e310f7f1609e299f5ad137b8530b5ea08bbefedd843c13154\"" Oct 31 01:21:37.569805 kubelet[2119]: E1031 01:21:37.569760 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:37.597489 env[1318]: time="2025-10-31T01:21:37.597301703Z" level=info msg="StartContainer for \"12e2015fd6464e1e310f7f1609e299f5ad137b8530b5ea08bbefedd843c13154\" returns successfully" Oct 31 01:21:37.597000 audit[4061]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=4061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:37.597000 audit[4061]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb32e9890 a2=0 a3=7ffeb32e987c items=0 ppid=2244 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:37.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:37.601000 audit[4061]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:37.601000 audit[4061]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeb32e9890 a2=0 a3=0 items=0 ppid=2244 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:37.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:37.630829 systemd[1]: run-netns-cni\x2dc8b3e8f5\x2d65ca\x2d1107\x2d38da\x2dc0635476a3ad.mount: Deactivated successfully. 
Oct 31 01:21:37.865586 systemd-networkd[1079]: cali05797aed71b: Gained IPv6LL Oct 31 01:21:38.311499 env[1318]: time="2025-10-31T01:21:38.311197913Z" level=info msg="StopPodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.354 [INFO][4081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.354 [INFO][4081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" iface="eth0" netns="/var/run/netns/cni-2261b14b-a981-e29f-0226-7ed93f6dd1a1" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.355 [INFO][4081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" iface="eth0" netns="/var/run/netns/cni-2261b14b-a981-e29f-0226-7ed93f6dd1a1" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.355 [INFO][4081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" iface="eth0" netns="/var/run/netns/cni-2261b14b-a981-e29f-0226-7ed93f6dd1a1" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.355 [INFO][4081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.355 [INFO][4081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.372 [INFO][4090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.373 [INFO][4090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.373 [INFO][4090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.378 [WARNING][4090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.378 [INFO][4090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.379 [INFO][4090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:38.382359 env[1318]: 2025-10-31 01:21:38.380 [INFO][4081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:38.382895 env[1318]: time="2025-10-31T01:21:38.382606311Z" level=info msg="TearDown network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" successfully" Oct 31 01:21:38.382895 env[1318]: time="2025-10-31T01:21:38.382645886Z" level=info msg="StopPodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" returns successfully" Oct 31 01:21:38.383547 env[1318]: time="2025-10-31T01:21:38.383525247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9l4v,Uid:9ef33ba9-4950-4b3a-9079-7b7964e46235,Namespace:calico-system,Attempt:1,}" Oct 31 01:21:38.384989 systemd[1]: run-netns-cni\x2d2261b14b\x2da981\x2de29f\x2d0226\x2d7ed93f6dd1a1.mount: Deactivated successfully. 
Oct 31 01:21:38.505564 systemd-networkd[1079]: cali4f06b77d8c3: Gained IPv6LL Oct 31 01:21:38.573002 kubelet[2119]: E1031 01:21:38.572109 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:38.573002 kubelet[2119]: E1031 01:21:38.572485 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:39.013000 audit[4110]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:39.013000 audit[4110]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc7fb5a8d0 a2=0 a3=7ffc7fb5a8bc items=0 ppid=2244 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:39.019000 audit[4110]: NETFILTER_CFG table=nat:114 family=2 entries=14 op=nft_register_rule pid=4110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:39.019000 audit[4110]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc7fb5a8d0 a2=0 a3=0 items=0 ppid=2244 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:39.092024 systemd-networkd[1079]: cali0a6cbb2f06e: Link UP Oct 31 01:21:39.095696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:21:39.095862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0a6cbb2f06e: link becomes ready Oct 31 01:21:39.095843 systemd-networkd[1079]: cali0a6cbb2f06e: Gained carrier Oct 31 01:21:39.104891 kubelet[2119]: I1031 01:21:39.104827 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rnsbn" podStartSLOduration=42.104804305 podStartE2EDuration="42.104804305s" podCreationTimestamp="2025-10-31 01:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:21:38.991957264 +0000 UTC m=+46.817761539" watchObservedRunningTime="2025-10-31 01:21:39.104804305 +0000 UTC m=+46.930608560" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.035 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b9l4v-eth0 csi-node-driver- calico-system 9ef33ba9-4950-4b3a-9079-7b7964e46235 1036 0 2025-10-31 01:21:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b9l4v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0a6cbb2f06e [] [] }} 
ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.035 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.059 [INFO][4114] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" HandleID="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.059 [INFO][4114] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" HandleID="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b9l4v", "timestamp":"2025-10-31 01:21:39.059277765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.059 [INFO][4114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.059 [INFO][4114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.059 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.065 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.069 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.073 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.075 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.077 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.077 [INFO][4114] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.078 [INFO][4114] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57 Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.081 [INFO][4114] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.088 [INFO][4114] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" host="localhost" Oct 31 
01:21:39.106639 env[1318]: 2025-10-31 01:21:39.088 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" host="localhost" Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.088 [INFO][4114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:39.106639 env[1318]: 2025-10-31 01:21:39.088 [INFO][4114] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" HandleID="k8s-pod-network.0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.090 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9l4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef33ba9-4950-4b3a-9079-7b7964e46235", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b9l4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a6cbb2f06e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.090 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.090 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a6cbb2f06e ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.096 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.096 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9l4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef33ba9-4950-4b3a-9079-7b7964e46235", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57", Pod:"csi-node-driver-b9l4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a6cbb2f06e", MAC:"c2:b1:5f:5c:7b:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:39.107203 env[1318]: 2025-10-31 01:21:39.105 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57" Namespace="calico-system" Pod="csi-node-driver-b9l4v" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:39.118461 env[1318]: time="2025-10-31T01:21:39.118154028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:39.118461 env[1318]: time="2025-10-31T01:21:39.118203501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:39.118461 env[1318]: time="2025-10-31T01:21:39.118216595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:39.118461 env[1318]: time="2025-10-31T01:21:39.118393478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57 pid=4140 runtime=io.containerd.runc.v2 Oct 31 01:21:39.119000 audit[4145]: NETFILTER_CFG table=filter:115 family=2 entries=50 op=nft_register_chain pid=4145 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:39.119000 audit[4145]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffd9e96ce20 a2=0 a3=7ffd9e96ce0c items=0 ppid=3582 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.119000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:39.146515 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:39.159785 env[1318]: time="2025-10-31T01:21:39.159737738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9l4v,Uid:9ef33ba9-4950-4b3a-9079-7b7964e46235,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57\"" Oct 31 01:21:39.161569 env[1318]: 
time="2025-10-31T01:21:39.161234317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:21:39.310659 env[1318]: time="2025-10-31T01:21:39.310615722Z" level=info msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" Oct 31 01:21:39.494417 env[1318]: time="2025-10-31T01:21:39.494329177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" iface="eth0" netns="/var/run/netns/cni-7955f937-7209-6dd1-ed75-3b9dc8509961" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" iface="eth0" netns="/var/run/netns/cni-7955f937-7209-6dd1-ed75-3b9dc8509961" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" iface="eth0" netns="/var/run/netns/cni-7955f937-7209-6dd1-ed75-3b9dc8509961" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.496 [INFO][4188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.513 [INFO][4197] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.513 [INFO][4197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.513 [INFO][4197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.519 [WARNING][4197] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.519 [INFO][4197] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.521 [INFO][4197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:39.524216 env[1318]: 2025-10-31 01:21:39.522 [INFO][4188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:39.524734 env[1318]: time="2025-10-31T01:21:39.524350738Z" level=info msg="TearDown network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" successfully" Oct 31 01:21:39.524734 env[1318]: time="2025-10-31T01:21:39.524390392Z" level=info msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" returns successfully" Oct 31 01:21:39.525105 env[1318]: time="2025-10-31T01:21:39.525047265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-2pcg2,Uid:06e5831d-75dc-4025-8be9-9be7b711ddfe,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:21:39.526811 systemd[1]: run-netns-cni\x2d7955f937\x2d7209\x2d6dd1\x2ded75\x2d3b9dc8509961.mount: Deactivated successfully. 
Oct 31 01:21:39.530552 env[1318]: time="2025-10-31T01:21:39.530515317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:21:39.530763 kubelet[2119]: E1031 01:21:39.530704 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:21:39.530763 kubelet[2119]: E1031 01:21:39.530756 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:21:39.530925 kubelet[2119]: E1031 01:21:39.530878 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:39.532753 env[1318]: time="2025-10-31T01:21:39.532730384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:21:39.575920 kubelet[2119]: E1031 01:21:39.574577 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:39.608000 audit[4218]: NETFILTER_CFG table=filter:116 family=2 entries=17 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:39.608000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa4053f20 a2=0 a3=7fffa4053f0c items=0 ppid=2244 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:39.614000 audit[4218]: NETFILTER_CFG table=nat:117 family=2 entries=35 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:39.614000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffa4053f20 a2=0 a3=7fffa4053f0c items=0 ppid=2244 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:39.684274 systemd-networkd[1079]: calid804a043560: Link UP Oct 31 01:21:39.686535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid804a043560: 
link becomes ready Oct 31 01:21:39.686400 systemd-networkd[1079]: calid804a043560: Gained carrier Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.624 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0 calico-apiserver-5df7bf54df- calico-apiserver 06e5831d-75dc-4025-8be9-9be7b711ddfe 1051 0 2025-10-31 01:21:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df7bf54df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5df7bf54df-2pcg2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid804a043560 [] [] }} ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.624 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.649 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" HandleID="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.650 [INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" HandleID="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5df7bf54df-2pcg2", "timestamp":"2025-10-31 01:21:39.649985116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.650 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.650 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.650 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.655 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.658 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.666 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.668 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.670 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:39.706069 
env[1318]: 2025-10-31 01:21:39.670 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.671 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.674 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.680 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.680 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" host="localhost" Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.680 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:39.706069 env[1318]: 2025-10-31 01:21:39.680 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" HandleID="k8s-pod-network.7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.682 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"06e5831d-75dc-4025-8be9-9be7b711ddfe", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5df7bf54df-2pcg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid804a043560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.682 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.682 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid804a043560 ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.686 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.688 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"06e5831d-75dc-4025-8be9-9be7b711ddfe", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b", Pod:"calico-apiserver-5df7bf54df-2pcg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid804a043560", MAC:"52:b6:4c:32:99:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:39.707139 env[1318]: 2025-10-31 01:21:39.703 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-2pcg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:39.716496 env[1318]: time="2025-10-31T01:21:39.716334954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:39.716496 env[1318]: time="2025-10-31T01:21:39.716379358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:39.716496 env[1318]: time="2025-10-31T01:21:39.716413572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:39.718406 env[1318]: time="2025-10-31T01:21:39.717665021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b pid=4243 runtime=io.containerd.runc.v2 Oct 31 01:21:39.720000 audit[4252]: NETFILTER_CFG table=filter:118 family=2 entries=58 op=nft_register_chain pid=4252 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:39.720000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=30568 a0=3 a1=7ffc6a2458f0 a2=0 a3=7ffc6a2458dc items=0 ppid=3582 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:39.720000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:39.740216 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:39.763111 env[1318]: time="2025-10-31T01:21:39.763010389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-2pcg2,Uid:06e5831d-75dc-4025-8be9-9be7b711ddfe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b\"" Oct 31 01:21:39.884100 env[1318]: time="2025-10-31T01:21:39.883962570Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:39.913775 env[1318]: time="2025-10-31T01:21:39.913684039Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:21:39.914044 kubelet[2119]: E1031 01:21:39.913992 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:21:39.914044 kubelet[2119]: E1031 01:21:39.914058 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:21:39.914404 kubelet[2119]: E1031 01:21:39.914308 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:39.914686 env[1318]: time="2025-10-31T01:21:39.914470695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:21:39.915966 kubelet[2119]: E1031 01:21:39.915907 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:40.258202 env[1318]: time="2025-10-31T01:21:40.258004192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:40.259889 env[1318]: time="2025-10-31T01:21:40.259806885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:21:40.260285 kubelet[2119]: E1031 01:21:40.260214 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:40.260486 kubelet[2119]: E1031 01:21:40.260295 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:40.260608 kubelet[2119]: E1031 01:21:40.260542 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xllgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-2pcg2_calico-apiserver(06e5831d-75dc-4025-8be9-9be7b711ddfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:40.261822 kubelet[2119]: E1031 01:21:40.261757 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:40.310873 env[1318]: time="2025-10-31T01:21:40.310829405Z" level=info msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" Oct 31 01:21:40.383236 env[1318]: 
2025-10-31 01:21:40.353 [INFO][4293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.353 [INFO][4293] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" iface="eth0" netns="/var/run/netns/cni-6bbdfc8d-88af-aa6d-4173-661c45c74efb" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.354 [INFO][4293] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" iface="eth0" netns="/var/run/netns/cni-6bbdfc8d-88af-aa6d-4173-661c45c74efb" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.354 [INFO][4293] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" iface="eth0" netns="/var/run/netns/cni-6bbdfc8d-88af-aa6d-4173-661c45c74efb" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.354 [INFO][4293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.354 [INFO][4293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.373 [INFO][4302] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.373 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.373 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.379 [WARNING][4302] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.379 [INFO][4302] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.380 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:40.383236 env[1318]: 2025-10-31 01:21:40.381 [INFO][4293] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:40.383795 env[1318]: time="2025-10-31T01:21:40.383380916Z" level=info msg="TearDown network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" successfully" Oct 31 01:21:40.383795 env[1318]: time="2025-10-31T01:21:40.383437171Z" level=info msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" returns successfully" Oct 31 01:21:40.384177 env[1318]: time="2025-10-31T01:21:40.384150240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b466566-mfnxs,Uid:aa2fbf03-d734-4df0-9482-3da8a7ab55e1,Namespace:calico-system,Attempt:1,}" Oct 31 01:21:40.386533 systemd[1]: run-netns-cni\x2d6bbdfc8d\x2d88af\x2daa6d\x2d4173\x2d661c45c74efb.mount: Deactivated successfully. Oct 31 01:21:40.576910 kubelet[2119]: E1031 01:21:40.576664 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:40.577354 kubelet[2119]: E1031 01:21:40.577229 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:40.577485 kubelet[2119]: E1031 01:21:40.577464 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:40.617542 systemd-networkd[1079]: cali0a6cbb2f06e: Gained IPv6LL Oct 31 01:21:40.729541 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:38632.service. Oct 31 01:21:40.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:38632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:40.731874 kernel: kauditd_printk_skb: 37 callbacks suppressed Oct 31 01:21:40.731918 kernel: audit: type=1130 audit(1761873700.728:420): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:38632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:40.763000 audit[4333]: USER_ACCT pid=4333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.765457 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 38632 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:40.770900 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:40.769000 audit[4333]: CRED_ACQ pid=4333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.776285 systemd-logind[1300]: New session 10 of user core. Oct 31 01:21:40.776966 systemd[1]: Started session-10.scope. Oct 31 01:21:40.781081 kernel: audit: type=1101 audit(1761873700.763:421): pid=4333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.781149 kernel: audit: type=1103 audit(1761873700.769:422): pid=4333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.785768 kernel: audit: type=1006 audit(1761873700.769:423): pid=4333 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 31 01:21:40.785829 kernel: audit: type=1300 audit(1761873700.769:423): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc180b0360 a2=3 a3=0 items=0 ppid=1 pid=4333 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:40.769000 audit[4333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc180b0360 a2=3 a3=0 items=0 ppid=1 pid=4333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:40.793276 kernel: audit: type=1327 audit(1761873700.769:423): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:40.769000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:40.795816 kernel: audit: type=1105 audit(1761873700.780:424): pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.780000 audit[4333]: USER_START pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.781000 audit[4336]: CRED_ACQ pid=4336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.812418 kernel: audit: type=1103 audit(1761873700.781:425): pid=4336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:40.812476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 
01:21:40.813857 systemd-networkd[1079]: cali3803d71bd03: Link UP Oct 31 01:21:40.816012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3803d71bd03: link becomes ready Oct 31 01:21:40.816162 systemd-networkd[1079]: cali3803d71bd03: Gained carrier Oct 31 01:21:40.816334 systemd-networkd[1079]: calid804a043560: Gained IPv6LL Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.426 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0 calico-kube-controllers-86b466566- calico-system aa2fbf03-d734-4df0-9482-3da8a7ab55e1 1075 0 2025-10-31 01:21:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86b466566 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86b466566-mfnxs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3803d71bd03 [] [] }} ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.426 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.450 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" 
HandleID="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.450 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" HandleID="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86b466566-mfnxs", "timestamp":"2025-10-31 01:21:40.450326757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.450 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.450 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.450 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.456 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.460 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.464 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.465 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.467 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.467 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.468 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.472 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.802 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" host="localhost" Oct 31 
01:21:41.167493 env[1318]: 2025-10-31 01:21:40.802 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" host="localhost" Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.803 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:41.167493 env[1318]: 2025-10-31 01:21:40.803 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" HandleID="k8s-pod-network.59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:40.805 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0", GenerateName:"calico-kube-controllers-86b466566-", Namespace:"calico-system", SelfLink:"", UID:"aa2fbf03-d734-4df0-9482-3da8a7ab55e1", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b466566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86b466566-mfnxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3803d71bd03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:40.805 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:40.805 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3803d71bd03 ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:40.816 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:40.816 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0", GenerateName:"calico-kube-controllers-86b466566-", Namespace:"calico-system", SelfLink:"", UID:"aa2fbf03-d734-4df0-9482-3da8a7ab55e1", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b466566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca", Pod:"calico-kube-controllers-86b466566-mfnxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3803d71bd03", MAC:"52:3d:2b:2e:17:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:41.171728 env[1318]: 2025-10-31 01:21:41.160 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca" Namespace="calico-system" Pod="calico-kube-controllers-86b466566-mfnxs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:41.175259 sshd[4333]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:41.175000 audit[4333]: USER_END pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:41.188906 kernel: audit: type=1106 audit(1761873701.175:426): pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:41.179000 audit[4333]: CRED_DISP pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:41.196140 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:38632.service: Deactivated successfully. Oct 31 01:21:41.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.140:22-10.0.0.1:38632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:41.198508 kernel: audit: type=1104 audit(1761873701.179:427): pid=4333 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:41.198366 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 01:21:41.199003 systemd-logind[1300]: Session 10 logged out. Waiting for processes to exit. Oct 31 01:21:41.199767 systemd-logind[1300]: Removed session 10. Oct 31 01:21:41.201937 env[1318]: time="2025-10-31T01:21:41.201855181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:41.202066 env[1318]: time="2025-10-31T01:21:41.201959818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:41.202066 env[1318]: time="2025-10-31T01:21:41.201987239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:41.202179 env[1318]: time="2025-10-31T01:21:41.202143602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca pid=4361 runtime=io.containerd.runc.v2 Oct 31 01:21:41.206000 audit[4373]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule pid=4373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:41.206000 audit[4373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc68e2630 a2=0 a3=7ffcc68e261c items=0 ppid=2244 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:41.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:41.212000 audit[4373]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:41.212000 audit[4373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc68e2630 a2=0 a3=7ffcc68e261c items=0 ppid=2244 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:41.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:41.224000 audit[4367]: NETFILTER_CFG table=filter:121 family=2 entries=48 op=nft_register_chain pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:41.224000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=23124 a0=3 
a1=7fffa8f4da50 a2=0 a3=7fffa8f4da3c items=0 ppid=3582 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:41.224000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:41.236421 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:41.269517 env[1318]: time="2025-10-31T01:21:41.269460756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b466566-mfnxs,Uid:aa2fbf03-d734-4df0-9482-3da8a7ab55e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca\"" Oct 31 01:21:41.270769 env[1318]: time="2025-10-31T01:21:41.270747500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:21:41.310682 env[1318]: time="2025-10-31T01:21:41.310633187Z" level=info msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" Oct 31 01:21:41.310937 env[1318]: time="2025-10-31T01:21:41.310633197Z" level=info msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" iface="eth0" netns="/var/run/netns/cni-9123a036-812e-a8b4-8d08-1fbba5109131" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" iface="eth0" netns="/var/run/netns/cni-9123a036-812e-a8b4-8d08-1fbba5109131" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" iface="eth0" netns="/var/run/netns/cni-9123a036-812e-a8b4-8d08-1fbba5109131" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.354 [INFO][4421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.375 [INFO][4435] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.375 [INFO][4435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.375 [INFO][4435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.560 [WARNING][4435] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.560 [INFO][4435] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.561 [INFO][4435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:41.564590 env[1318]: 2025-10-31 01:21:41.563 [INFO][4421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:41.565080 env[1318]: time="2025-10-31T01:21:41.564838031Z" level=info msg="TearDown network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" successfully" Oct 31 01:21:41.565080 env[1318]: time="2025-10-31T01:21:41.564871173Z" level=info msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" returns successfully" Oct 31 01:21:41.565854 env[1318]: time="2025-10-31T01:21:41.565824342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-pqphd,Uid:f883da0a-4f39-47f1-824b-f2e94084a2d5,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:21:41.568186 systemd[1]: run-netns-cni\x2d9123a036\x2d812e\x2da8b4\x2d8d08\x2d1fbba5109131.mount: Deactivated successfully. 
Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.355 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.356 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" iface="eth0" netns="/var/run/netns/cni-b537defd-f938-5b6e-0938-2bcc7c61222e" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.356 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" iface="eth0" netns="/var/run/netns/cni-b537defd-f938-5b6e-0938-2bcc7c61222e" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.358 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" iface="eth0" netns="/var/run/netns/cni-b537defd-f938-5b6e-0938-2bcc7c61222e" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.358 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.359 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.382 [INFO][4442] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.382 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.561 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.569 [WARNING][4442] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.569 [INFO][4442] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.571 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:41.574159 env[1318]: 2025-10-31 01:21:41.572 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:41.574549 env[1318]: time="2025-10-31T01:21:41.574310056Z" level=info msg="TearDown network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" successfully" Oct 31 01:21:41.574549 env[1318]: time="2025-10-31T01:21:41.574342517Z" level=info msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" returns successfully" Oct 31 01:21:41.574700 kubelet[2119]: E1031 01:21:41.574675 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:41.575442 env[1318]: time="2025-10-31T01:21:41.575413677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgcnx,Uid:03e81ffc-7bc8-4496-a870-f2e322aeb1d9,Namespace:kube-system,Attempt:1,}" Oct 31 01:21:41.577893 systemd[1]: run-netns-cni\x2db537defd\x2df938\x2d5b6e\x2d0938\x2d2bcc7c61222e.mount: Deactivated successfully. 
Oct 31 01:21:41.579804 kubelet[2119]: E1031 01:21:41.579212 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:41.579804 kubelet[2119]: E1031 01:21:41.579493 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:41.595933 env[1318]: time="2025-10-31T01:21:41.595892606Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:41.751197 env[1318]: time="2025-10-31T01:21:41.751122932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:21:41.753544 kubelet[2119]: E1031 01:21:41.753507 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:21:41.753544 kubelet[2119]: E1031 01:21:41.753557 2119 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:21:41.753795 kubelet[2119]: E1031 01:21:41.753716 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-784c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b466566-mfnxs_calico-system(aa2fbf03-d734-4df0-9482-3da8a7ab55e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:41.754910 kubelet[2119]: E1031 01:21:41.754842 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:21:41.880149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:21:41.880251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia437b6aa7a6: link becomes ready Oct 31 01:21:41.881551 systemd-networkd[1079]: calia437b6aa7a6: Link UP Oct 31 01:21:41.881970 systemd-networkd[1079]: calia437b6aa7a6: Gained carrier Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.809 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0 coredns-668d6bf9bc- kube-system 03e81ffc-7bc8-4496-a870-f2e322aeb1d9 1102 0 2025-10-31 01:20:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lgcnx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia437b6aa7a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.809 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.839 [INFO][4487] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" HandleID="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" 
Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.839 [INFO][4487] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" HandleID="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325d30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lgcnx", "timestamp":"2025-10-31 01:21:41.839077613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.839 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.839 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.839 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.845 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.852 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.856 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.858 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.860 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.860 [INFO][4487] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.861 [INFO][4487] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.864 [INFO][4487] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.871 [INFO][4487] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" host="localhost" Oct 31 
01:21:41.895467 env[1318]: 2025-10-31 01:21:41.871 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" host="localhost" Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.871 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:41.895467 env[1318]: 2025-10-31 01:21:41.871 [INFO][4487] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" HandleID="k8s-pod-network.92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.874 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03e81ffc-7bc8-4496-a870-f2e322aeb1d9", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lgcnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia437b6aa7a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.874 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.874 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia437b6aa7a6 ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.880 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.882 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03e81ffc-7bc8-4496-a870-f2e322aeb1d9", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d", Pod:"coredns-668d6bf9bc-lgcnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia437b6aa7a6", MAC:"52:74:c9:0c:d0:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:41.896918 env[1318]: 2025-10-31 01:21:41.891 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgcnx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:41.898520 systemd-networkd[1079]: cali3803d71bd03: Gained IPv6LL Oct 31 01:21:41.905487 env[1318]: time="2025-10-31T01:21:41.904173728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:41.905487 env[1318]: time="2025-10-31T01:21:41.904206539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:41.905487 env[1318]: time="2025-10-31T01:21:41.904215616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:41.905487 env[1318]: time="2025-10-31T01:21:41.904371188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d pid=4519 runtime=io.containerd.runc.v2 Oct 31 01:21:41.908000 audit[4531]: NETFILTER_CFG table=filter:122 family=2 entries=48 op=nft_register_chain pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:41.908000 audit[4531]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffe5d21aa50 a2=0 a3=7ffe5d21aa3c items=0 ppid=3582 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:41.908000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:41.927078 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:41.949792 env[1318]: time="2025-10-31T01:21:41.949750193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgcnx,Uid:03e81ffc-7bc8-4496-a870-f2e322aeb1d9,Namespace:kube-system,Attempt:1,} returns sandbox id \"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d\"" Oct 31 01:21:41.950784 kubelet[2119]: E1031 01:21:41.950627 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:41.952867 env[1318]: time="2025-10-31T01:21:41.952831796Z" level=info msg="CreateContainer within sandbox \"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:21:42.071583 env[1318]: time="2025-10-31T01:21:42.071521566Z" level=info msg="CreateContainer within sandbox \"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e5bfd8957a92e56f323dcb104524fd44c5e45c724c5824d4c6744451249f937\"" Oct 31 01:21:42.072128 env[1318]: time="2025-10-31T01:21:42.072091997Z" level=info msg="StartContainer for \"2e5bfd8957a92e56f323dcb104524fd44c5e45c724c5824d4c6744451249f937\"" Oct 31 01:21:42.081412 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7e0e978cbab: link becomes ready Oct 31 01:21:42.083912 systemd-networkd[1079]: cali7e0e978cbab: Link UP Oct 31 01:21:42.084069 systemd-networkd[1079]: cali7e0e978cbab: Gained carrier Oct 31 01:21:42.166606 env[1318]: time="2025-10-31T01:21:42.165209759Z" level=info msg="StartContainer for \"2e5bfd8957a92e56f323dcb104524fd44c5e45c724c5824d4c6744451249f937\" returns successfully" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.819 [INFO][4461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0 calico-apiserver-5df7bf54df- calico-apiserver f883da0a-4f39-47f1-824b-f2e94084a2d5 1101 0 2025-10-31 01:21:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df7bf54df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5df7bf54df-pqphd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e0e978cbab [] [] }} ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-" Oct 31 
01:21:42.170619 env[1318]: 2025-10-31 01:21:41.819 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.845 [INFO][4495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" HandleID="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.849 [INFO][4495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" HandleID="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000342320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5df7bf54df-pqphd", "timestamp":"2025-10-31 01:21:41.845407791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.849 [INFO][4495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.871 [INFO][4495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.871 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:41.945 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.024 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.058 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.060 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.062 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.062 [INFO][4495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.063 [INFO][4495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884 Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.069 [INFO][4495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.075 [INFO][4495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" host="localhost" Oct 31 
01:21:42.170619 env[1318]: 2025-10-31 01:21:42.075 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" host="localhost" Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.075 [INFO][4495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:42.170619 env[1318]: 2025-10-31 01:21:42.075 [INFO][4495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" HandleID="k8s-pod-network.874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.077 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f883da0a-4f39-47f1-824b-f2e94084a2d5", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5df7bf54df-pqphd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e0e978cbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.077 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.077 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e0e978cbab ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.081 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.081 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" 
Pod="calico-apiserver-5df7bf54df-pqphd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f883da0a-4f39-47f1-824b-f2e94084a2d5", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884", Pod:"calico-apiserver-5df7bf54df-pqphd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e0e978cbab", MAC:"16:50:57:45:db:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:42.171697 env[1318]: 2025-10-31 01:21:42.168 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884" Namespace="calico-apiserver" Pod="calico-apiserver-5df7bf54df-pqphd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:42.188074 env[1318]: time="2025-10-31T01:21:42.188005384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:21:42.188256 env[1318]: time="2025-10-31T01:21:42.188232321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:21:42.188364 env[1318]: time="2025-10-31T01:21:42.188340884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:21:42.188593 env[1318]: time="2025-10-31T01:21:42.188568882Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884 pid=4605 runtime=io.containerd.runc.v2 Oct 31 01:21:42.187000 audit[4606]: NETFILTER_CFG table=filter:123 family=2 entries=57 op=nft_register_chain pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:21:42.187000 audit[4606]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffe09460220 a2=0 a3=7ffe0946020c items=0 ppid=3582 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:42.187000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:21:42.212449 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:21:42.232788 env[1318]: time="2025-10-31T01:21:42.232752561Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5df7bf54df-pqphd,Uid:f883da0a-4f39-47f1-824b-f2e94084a2d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884\"" Oct 31 01:21:42.234228 env[1318]: time="2025-10-31T01:21:42.234191550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:21:42.538019 env[1318]: time="2025-10-31T01:21:42.537869481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:42.583697 kubelet[2119]: E1031 01:21:42.583254 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:42.584987 kubelet[2119]: E1031 01:21:42.584943 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:21:42.606822 env[1318]: time="2025-10-31T01:21:42.606732341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:21:42.607058 kubelet[2119]: E1031 01:21:42.606962 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:42.607058 kubelet[2119]: E1031 01:21:42.607028 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:42.607170 kubelet[2119]: E1031 01:21:42.607142 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-pqphd_calico-apiserver(f883da0a-4f39-47f1-824b-f2e94084a2d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:42.608338 kubelet[2119]: E1031 01:21:42.608294 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:21:42.743220 kubelet[2119]: I1031 01:21:42.742509 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lgcnx" podStartSLOduration=45.7424932 
podStartE2EDuration="45.7424932s" podCreationTimestamp="2025-10-31 01:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:21:42.74078218 +0000 UTC m=+50.566586435" watchObservedRunningTime="2025-10-31 01:21:42.7424932 +0000 UTC m=+50.568297455" Oct 31 01:21:42.753000 audit[4642]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:42.753000 audit[4642]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7cfefa20 a2=0 a3=7fff7cfefa0c items=0 ppid=2244 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:42.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:42.760000 audit[4642]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:42.760000 audit[4642]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff7cfefa20 a2=0 a3=7fff7cfefa0c items=0 ppid=2244 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:42.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:43.305628 systemd-networkd[1079]: calia437b6aa7a6: Gained IPv6LL Oct 31 01:21:43.586366 kubelet[2119]: E1031 01:21:43.586255 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:43.586999 kubelet[2119]: E1031 01:21:43.586883 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:21:43.823000 audit[4644]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:43.823000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdccfa22c0 a2=0 a3=7ffdccfa22ac items=0 ppid=2244 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:43.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:43.838000 audit[4644]: NETFILTER_CFG table=nat:127 family=2 entries=56 op=nft_register_chain pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:43.838000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdccfa22c0 a2=0 a3=7ffdccfa22ac items=0 ppid=2244 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:43.838000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:43.881639 systemd-networkd[1079]: cali7e0e978cbab: Gained IPv6LL Oct 31 01:21:44.096000 audit[4647]: NETFILTER_CFG table=filter:128 family=2 entries=14 op=nft_register_rule pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:44.096000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb8565350 a2=0 a3=7ffdb856533c items=0 ppid=2244 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:44.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:44.101000 audit[4647]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:21:44.101000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb8565350 a2=0 a3=7ffdb856533c items=0 ppid=2244 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:44.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:21:44.587784 kubelet[2119]: E1031 01:21:44.587758 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:21:46.178857 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:40176.service. 
Oct 31 01:21:46.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:40176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.180484 kernel: kauditd_printk_skb: 34 callbacks suppressed Oct 31 01:21:46.180634 kernel: audit: type=1130 audit(1761873706.177:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:40176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.212000 audit[4654]: USER_ACCT pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.214470 sshd[4654]: Accepted publickey for core from 10.0.0.1 port 40176 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:46.219795 sshd[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:46.218000 audit[4654]: CRED_ACQ pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.223931 systemd-logind[1300]: New session 11 of user core. Oct 31 01:21:46.224816 systemd[1]: Started session-11.scope. 
Oct 31 01:21:46.226810 kernel: audit: type=1101 audit(1761873706.212:441): pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.226867 kernel: audit: type=1103 audit(1761873706.218:442): pid=4654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.230768 kernel: audit: type=1006 audit(1761873706.218:443): pid=4654 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Oct 31 01:21:46.230813 kernel: audit: type=1300 audit(1761873706.218:443): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc04dd5820 a2=3 a3=0 items=0 ppid=1 pid=4654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:46.218000 audit[4654]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc04dd5820 a2=3 a3=0 items=0 ppid=1 pid=4654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:46.218000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:46.239466 kernel: audit: type=1327 audit(1761873706.218:443): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:46.239511 kernel: audit: type=1105 audit(1761873706.227:444): pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.227000 audit[4654]: USER_START pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.228000 audit[4657]: CRED_ACQ pid=4657 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.251904 kernel: audit: type=1103 audit(1761873706.228:445): pid=4657 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.336774 sshd[4654]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:46.339289 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:40192.service. Oct 31 01:21:46.336000 audit[4654]: USER_END pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.344940 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:40176.service: Deactivated successfully. Oct 31 01:21:46.345748 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 01:21:46.336000 audit[4654]: CRED_DISP pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.346233 systemd-logind[1300]: Session 11 logged out. Waiting for processes to exit. 
Oct 31 01:21:46.347033 systemd-logind[1300]: Removed session 11. Oct 31 01:21:46.352420 kernel: audit: type=1106 audit(1761873706.336:446): pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.352511 kernel: audit: type=1104 audit(1761873706.336:447): pid=4654 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.140:22-10.0.0.1:40192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.140:22-10.0.0.1:40176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:46.375000 audit[4669]: USER_ACCT pid=4669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.377423 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 40192 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:46.376000 audit[4669]: CRED_ACQ pid=4669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.376000 audit[4669]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc998cc20 a2=3 a3=0 items=0 ppid=1 pid=4669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:46.376000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:46.378263 sshd[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:46.381468 systemd-logind[1300]: New session 12 of user core. Oct 31 01:21:46.382262 systemd[1]: Started session-12.scope. 
Oct 31 01:21:46.384000 audit[4669]: USER_START pid=4669 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.385000 audit[4673]: CRED_ACQ pid=4673 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.506472 sshd[4669]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:46.506000 audit[4669]: USER_END pid=4669 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.506000 audit[4669]: CRED_DISP pid=4669 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.140:22-10.0.0.1:40198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.140:22-10.0.0.1:40192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.508623 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:40198.service. Oct 31 01:21:46.509398 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:40192.service: Deactivated successfully. 
Oct 31 01:21:46.510000 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 01:21:46.513740 systemd-logind[1300]: Session 12 logged out. Waiting for processes to exit. Oct 31 01:21:46.514862 systemd-logind[1300]: Removed session 12. Oct 31 01:21:46.542000 audit[4681]: USER_ACCT pid=4681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.543828 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:46.543000 audit[4681]: CRED_ACQ pid=4681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.543000 audit[4681]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1956fe60 a2=3 a3=0 items=0 ppid=1 pid=4681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:46.543000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:46.544889 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:46.549094 systemd[1]: Started session-13.scope. Oct 31 01:21:46.550027 systemd-logind[1300]: New session 13 of user core. 
Oct 31 01:21:46.553000 audit[4681]: USER_START pid=4681 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.554000 audit[4686]: CRED_ACQ pid=4686 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.652874 sshd[4681]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:46.652000 audit[4681]: USER_END pid=4681 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.652000 audit[4681]: CRED_DISP pid=4681 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:46.655489 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:40198.service: Deactivated successfully. Oct 31 01:21:46.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.140:22-10.0.0.1:40198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:46.656581 systemd-logind[1300]: Session 13 logged out. Waiting for processes to exit. Oct 31 01:21:46.656643 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 01:21:46.657401 systemd-logind[1300]: Removed session 13. 
Oct 31 01:21:50.312533 env[1318]: time="2025-10-31T01:21:50.312279626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:21:50.650376 env[1318]: time="2025-10-31T01:21:50.650248298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:50.651428 env[1318]: time="2025-10-31T01:21:50.651363243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:21:50.651711 kubelet[2119]: E1031 01:21:50.651645 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:21:50.652044 kubelet[2119]: E1031 01:21:50.651716 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:21:50.652044 kubelet[2119]: E1031 01:21:50.651851 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d59d10666e4b450bb44fb3ca0b0593f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:50.653966 env[1318]: time="2025-10-31T01:21:50.653930716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:21:50.998896 
env[1318]: time="2025-10-31T01:21:50.998759379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:50.999847 env[1318]: time="2025-10-31T01:21:50.999799239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:21:51.000033 kubelet[2119]: E1031 01:21:50.999993 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:21:51.000099 kubelet[2119]: E1031 01:21:51.000045 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:21:51.000223 kubelet[2119]: E1031 01:21:51.000184 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:51.001454 kubelet[2119]: E1031 01:21:51.001407 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb" Oct 31 01:21:51.656262 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:40212.service. Oct 31 01:21:51.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:40212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:51.657845 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 31 01:21:51.657923 kernel: audit: type=1130 audit(1761873711.654:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:40212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:51.684000 audit[4703]: USER_ACCT pid=4703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.685659 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 40212 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:51.690122 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:51.688000 audit[4703]: CRED_ACQ pid=4703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.694105 systemd-logind[1300]: New session 14 of user core. Oct 31 01:21:51.694739 systemd[1]: Started session-14.scope. Oct 31 01:21:51.699838 kernel: audit: type=1101 audit(1761873711.684:468): pid=4703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.699898 kernel: audit: type=1103 audit(1761873711.688:469): pid=4703 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.699919 kernel: audit: type=1006 audit(1761873711.688:470): pid=4703 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Oct 31 01:21:51.688000 audit[4703]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd292f73e0 a2=3 a3=0 items=0 ppid=1 pid=4703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:51.710645 kernel: audit: type=1300 audit(1761873711.688:470): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd292f73e0 a2=3 a3=0 items=0 ppid=1 pid=4703 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:51.710695 kernel: audit: type=1327 audit(1761873711.688:470): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:51.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:51.698000 audit[4703]: USER_START pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.720020 kernel: audit: type=1105 audit(1761873711.698:471): pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.720074 kernel: audit: type=1103 audit(1761873711.699:472): pid=4706 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.699000 audit[4706]: CRED_ACQ pid=4706 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.805271 sshd[4703]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:51.804000 
audit[4703]: USER_END pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.807965 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:40212.service: Deactivated successfully. Oct 31 01:21:51.808933 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 01:21:51.810277 systemd-logind[1300]: Session 14 logged out. Waiting for processes to exit. Oct 31 01:21:51.811131 systemd-logind[1300]: Removed session 14. Oct 31 01:21:51.805000 audit[4703]: CRED_DISP pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.819629 kernel: audit: type=1106 audit(1761873711.804:473): pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.819701 kernel: audit: type=1104 audit(1761873711.805:474): pid=4703 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:51.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.140:22-10.0.0.1:40212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:52.287189 env[1318]: time="2025-10-31T01:21:52.287136676Z" level=info msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" Oct 31 01:21:52.312410 env[1318]: time="2025-10-31T01:21:52.312354805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.316 [WARNING][4729] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"06e5831d-75dc-4025-8be9-9be7b711ddfe", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b", Pod:"calico-apiserver-5df7bf54df-2pcg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calid804a043560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.316 [INFO][4729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.316 [INFO][4729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" iface="eth0" netns="" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.316 [INFO][4729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.316 [INFO][4729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.338 [INFO][4739] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.338 [INFO][4739] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.338 [INFO][4739] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.343 [WARNING][4739] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.343 [INFO][4739] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.345 [INFO][4739] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.347797 env[1318]: 2025-10-31 01:21:52.346 [INFO][4729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.348406 env[1318]: time="2025-10-31T01:21:52.347829927Z" level=info msg="TearDown network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" successfully" Oct 31 01:21:52.348406 env[1318]: time="2025-10-31T01:21:52.347883150Z" level=info msg="StopPodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" returns successfully" Oct 31 01:21:52.348511 env[1318]: time="2025-10-31T01:21:52.348484691Z" level=info msg="RemovePodSandbox for \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" Oct 31 01:21:52.348563 env[1318]: time="2025-10-31T01:21:52.348522975Z" level=info msg="Forcibly stopping sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\"" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.375 [WARNING][4758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"06e5831d-75dc-4025-8be9-9be7b711ddfe", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e9af4ef8ef65ed5dada5bbac38a53fc1d18e50567769a5384ee5ea0617d0a7b", Pod:"calico-apiserver-5df7bf54df-2pcg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid804a043560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.375 [INFO][4758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.376 [INFO][4758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" iface="eth0" netns="" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.376 [INFO][4758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.376 [INFO][4758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.393 [INFO][4766] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.393 [INFO][4766] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.393 [INFO][4766] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.398 [WARNING][4766] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.398 [INFO][4766] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" HandleID="k8s-pod-network.6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Workload="localhost-k8s-calico--apiserver--5df7bf54df--2pcg2-eth0" Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.399 [INFO][4766] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.402302 env[1318]: 2025-10-31 01:21:52.400 [INFO][4758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84" Oct 31 01:21:52.402782 env[1318]: time="2025-10-31T01:21:52.402341169Z" level=info msg="TearDown network for sandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" successfully" Oct 31 01:21:52.407860 env[1318]: time="2025-10-31T01:21:52.407836515Z" level=info msg="RemovePodSandbox \"6f095c8c8a529ffa97aeb51167e6668f59f4a5d8526e2f6b2b2cd14240932c84\" returns successfully" Oct 31 01:21:52.408490 env[1318]: time="2025-10-31T01:21:52.408449267Z" level=info msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.437 [WARNING][4783] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" WorkloadEndpoint="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.437 [INFO][4783] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.437 [INFO][4783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" iface="eth0" netns="" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.437 [INFO][4783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.437 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.456 [INFO][4791] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.456 [INFO][4791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.456 [INFO][4791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.461 [WARNING][4791] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.461 [INFO][4791] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.463 [INFO][4791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.465937 env[1318]: 2025-10-31 01:21:52.464 [INFO][4783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.466424 env[1318]: time="2025-10-31T01:21:52.465973854Z" level=info msg="TearDown network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" successfully" Oct 31 01:21:52.466424 env[1318]: time="2025-10-31T01:21:52.466010645Z" level=info msg="StopPodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" returns successfully" Oct 31 01:21:52.466526 env[1318]: time="2025-10-31T01:21:52.466494820Z" level=info msg="RemovePodSandbox for \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" Oct 31 01:21:52.466560 env[1318]: time="2025-10-31T01:21:52.466532513Z" level=info msg="Forcibly stopping sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\"" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.493 [WARNING][4811] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" 
WorkloadEndpoint="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.493 [INFO][4811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.493 [INFO][4811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" iface="eth0" netns="" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.493 [INFO][4811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.493 [INFO][4811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.510 [INFO][4820] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.510 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.510 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.517 [WARNING][4820] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.517 [INFO][4820] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" HandleID="k8s-pod-network.638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Workload="localhost-k8s-whisker--757d7d9c66--vd262-eth0" Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.519 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.521953 env[1318]: 2025-10-31 01:21:52.520 [INFO][4811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05" Oct 31 01:21:52.522378 env[1318]: time="2025-10-31T01:21:52.521981734Z" level=info msg="TearDown network for sandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" successfully" Oct 31 01:21:52.527972 env[1318]: time="2025-10-31T01:21:52.527925506Z" level=info msg="RemovePodSandbox \"638dfc04fc90b87a5acbf4f7a95afd772003452d73b8cdcc6518078d6a6c7c05\" returns successfully" Oct 31 01:21:52.528519 env[1318]: time="2025-10-31T01:21:52.528475196Z" level=info msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.558 [WARNING][4838] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30ef7351-e113-44f3-84eb-f1e0f60f06cf", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0", Pod:"coredns-668d6bf9bc-rnsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f06b77d8c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.558 [INFO][4838] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.558 [INFO][4838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" iface="eth0" netns="" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.558 [INFO][4838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.558 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.574 [INFO][4846] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.574 [INFO][4846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.575 [INFO][4846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.579 [WARNING][4846] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.579 [INFO][4846] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.580 [INFO][4846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.583135 env[1318]: 2025-10-31 01:21:52.581 [INFO][4838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.583630 env[1318]: time="2025-10-31T01:21:52.583192325Z" level=info msg="TearDown network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" successfully" Oct 31 01:21:52.583630 env[1318]: time="2025-10-31T01:21:52.583224627Z" level=info msg="StopPodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" returns successfully" Oct 31 01:21:52.584017 env[1318]: time="2025-10-31T01:21:52.583971178Z" level=info msg="RemovePodSandbox for \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" Oct 31 01:21:52.584208 env[1318]: time="2025-10-31T01:21:52.584012708Z" level=info msg="Forcibly stopping sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\"" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.612 [WARNING][4863] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30ef7351-e113-44f3-84eb-f1e0f60f06cf", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3ef13b003d38069f41b9a7b8c1ebc09808330066a34a4f92cd69d31c2b963c0", Pod:"coredns-668d6bf9bc-rnsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f06b77d8c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.612 [INFO][4863] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.612 [INFO][4863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" iface="eth0" netns="" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.612 [INFO][4863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.612 [INFO][4863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.627 [INFO][4871] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.627 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.628 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.633 [WARNING][4871] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.633 [INFO][4871] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" HandleID="k8s-pod-network.38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Workload="localhost-k8s-coredns--668d6bf9bc--rnsbn-eth0" Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.635 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.638402 env[1318]: 2025-10-31 01:21:52.636 [INFO][4863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9" Oct 31 01:21:52.638868 env[1318]: time="2025-10-31T01:21:52.638437062Z" level=info msg="TearDown network for sandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" successfully" Oct 31 01:21:52.641886 env[1318]: time="2025-10-31T01:21:52.641852033Z" level=info msg="RemovePodSandbox \"38895ed327e18ea6a4e48cb626307d6f5d57c6fedbea2e93208324b779d4ebb9\" returns successfully" Oct 31 01:21:52.642371 env[1318]: time="2025-10-31T01:21:52.642333662Z" level=info msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" Oct 31 01:21:52.659617 env[1318]: time="2025-10-31T01:21:52.659566460Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:52.660558 env[1318]: time="2025-10-31T01:21:52.660519539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:21:52.660784 kubelet[2119]: E1031 01:21:52.660745 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:21:52.661068 kubelet[2119]: E1031 01:21:52.660797 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:21:52.661068 kubelet[2119]: E1031 01:21:52.660942 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:tr
ue,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vzlbq_calico-system(7147f3bc-4883-48d8-85dc-189c66dbfbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:52.662351 kubelet[2119]: 
E1031 01:21:52.662324 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.671 [WARNING][4890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vzlbq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7147f3bc-4883-48d8-85dc-189c66dbfbd3", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392", Pod:"goldmane-666569f655-vzlbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05797aed71b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.671 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.671 [INFO][4890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" iface="eth0" netns="" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.671 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.671 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.694 [INFO][4898] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.694 [INFO][4898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.694 [INFO][4898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.701 [WARNING][4898] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.701 [INFO][4898] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.702 [INFO][4898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.705020 env[1318]: 2025-10-31 01:21:52.703 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.705515 env[1318]: time="2025-10-31T01:21:52.705050069Z" level=info msg="TearDown network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" successfully" Oct 31 01:21:52.705515 env[1318]: time="2025-10-31T01:21:52.705085578Z" level=info msg="StopPodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" returns successfully" Oct 31 01:21:52.705575 env[1318]: time="2025-10-31T01:21:52.705543181Z" level=info msg="RemovePodSandbox for \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" Oct 31 01:21:52.705618 env[1318]: time="2025-10-31T01:21:52.705581735Z" level=info msg="Forcibly stopping sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\"" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.733 [WARNING][4915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vzlbq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7147f3bc-4883-48d8-85dc-189c66dbfbd3", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fb52bce3a7fe46f2b19898786485bfe1f80fcb9ed2e1eff4a0ccd6c53e5a392", Pod:"goldmane-666569f655-vzlbq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05797aed71b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.733 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.733 [INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" iface="eth0" netns="" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.733 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.733 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.750 [INFO][4924] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.750 [INFO][4924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.750 [INFO][4924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.755 [WARNING][4924] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.755 [INFO][4924] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" HandleID="k8s-pod-network.4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Workload="localhost-k8s-goldmane--666569f655--vzlbq-eth0" Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.756 [INFO][4924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:52.759115 env[1318]: 2025-10-31 01:21:52.757 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002" Oct 31 01:21:52.759629 env[1318]: time="2025-10-31T01:21:52.759574896Z" level=info msg="TearDown network for sandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" successfully" Oct 31 01:21:52.762771 env[1318]: time="2025-10-31T01:21:52.762746448Z" level=info msg="RemovePodSandbox \"4ea89c5fca02b92f57d37e180bab49890d335dfd85f488a6df0e93319d4cb002\" returns successfully" Oct 31 01:21:52.763279 env[1318]: time="2025-10-31T01:21:52.763235821Z" level=info msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.790 [WARNING][4941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03e81ffc-7bc8-4496-a870-f2e322aeb1d9", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d", Pod:"coredns-668d6bf9bc-lgcnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia437b6aa7a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.790 [INFO][4941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.790 [INFO][4941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" iface="eth0" netns="" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.790 [INFO][4941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.790 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.812 [INFO][4951] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.812 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.812 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.817 [WARNING][4951] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.817 [INFO][4951] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.819 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:52.822669 env[1318]: 2025-10-31 01:21:52.820 [INFO][4941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.823263 env[1318]: time="2025-10-31T01:21:52.823204726Z" level=info msg="TearDown network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" successfully" Oct 31 01:21:52.823263 env[1318]: time="2025-10-31T01:21:52.823241467Z" level=info msg="StopPodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" returns successfully" Oct 31 01:21:52.823865 env[1318]: time="2025-10-31T01:21:52.823816647Z" level=info msg="RemovePodSandbox for \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" Oct 31 01:21:52.823925 env[1318]: time="2025-10-31T01:21:52.823873065Z" level=info msg="Forcibly stopping sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\"" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.855 [WARNING][4969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03e81ffc-7bc8-4496-a870-f2e322aeb1d9", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 20, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92ab119e6853c020b1b9f9de47ab2874ef3ab47a1eef24f44351fb23c5df069d", Pod:"coredns-668d6bf9bc-lgcnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia437b6aa7a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.855 [INFO][4969] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.855 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" iface="eth0" netns="" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.855 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.855 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.872 [INFO][4978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.872 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.872 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.878 [WARNING][4978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.878 [INFO][4978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" HandleID="k8s-pod-network.71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Workload="localhost-k8s-coredns--668d6bf9bc--lgcnx-eth0" Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.879 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.882073 env[1318]: 2025-10-31 01:21:52.880 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89" Oct 31 01:21:52.882073 env[1318]: time="2025-10-31T01:21:52.882036536Z" level=info msg="TearDown network for sandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" successfully" Oct 31 01:21:52.886744 env[1318]: time="2025-10-31T01:21:52.886708181Z" level=info msg="RemovePodSandbox \"71e43596258043e393d6be6f144189ed20ad0adf0ab387ac2b709f1b78e37b89\" returns successfully" Oct 31 01:21:52.887213 env[1318]: time="2025-10-31T01:21:52.887189461Z" level=info msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.918 [WARNING][4995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0", GenerateName:"calico-kube-controllers-86b466566-", Namespace:"calico-system", SelfLink:"", UID:"aa2fbf03-d734-4df0-9482-3da8a7ab55e1", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b466566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca", Pod:"calico-kube-controllers-86b466566-mfnxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3803d71bd03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.919 [INFO][4995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.919 [INFO][4995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" iface="eth0" netns="" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.919 [INFO][4995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.919 [INFO][4995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.936 [INFO][5003] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.936 [INFO][5003] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.936 [INFO][5003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.942 [WARNING][5003] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.942 [INFO][5003] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.943 [INFO][5003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:52.947421 env[1318]: 2025-10-31 01:21:52.945 [INFO][4995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:52.947893 env[1318]: time="2025-10-31T01:21:52.947443755Z" level=info msg="TearDown network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" successfully" Oct 31 01:21:52.947893 env[1318]: time="2025-10-31T01:21:52.947475116Z" level=info msg="StopPodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" returns successfully" Oct 31 01:21:52.948041 env[1318]: time="2025-10-31T01:21:52.947981063Z" level=info msg="RemovePodSandbox for \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" Oct 31 01:21:52.948085 env[1318]: time="2025-10-31T01:21:52.948031190Z" level=info msg="Forcibly stopping sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\"" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.977 [WARNING][5021] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0", GenerateName:"calico-kube-controllers-86b466566-", Namespace:"calico-system", SelfLink:"", UID:"aa2fbf03-d734-4df0-9482-3da8a7ab55e1", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b466566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e244e67677190defa6b39cadc3446e2bf3d6ace61703ea72254c83939379ca", Pod:"calico-kube-controllers-86b466566-mfnxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3803d71bd03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.977 [INFO][5021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.977 [INFO][5021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" iface="eth0" netns="" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.977 [INFO][5021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.977 [INFO][5021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.993 [INFO][5030] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.993 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.994 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.998 [WARNING][5030] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.998 [INFO][5030] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" HandleID="k8s-pod-network.69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Workload="localhost-k8s-calico--kube--controllers--86b466566--mfnxs-eth0" Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:52.999 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:53.002858 env[1318]: 2025-10-31 01:21:53.000 [INFO][5021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a" Oct 31 01:21:53.003769 env[1318]: time="2025-10-31T01:21:53.002878597Z" level=info msg="TearDown network for sandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" successfully" Oct 31 01:21:53.006325 env[1318]: time="2025-10-31T01:21:53.006283934Z" level=info msg="RemovePodSandbox \"69029a5c9e7b6ff5334e170afd3d06d6ffdc28f18a338beb6255c1b4a366603a\" returns successfully" Oct 31 01:21:53.006841 env[1318]: time="2025-10-31T01:21:53.006795831Z" level=info msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.035 [WARNING][5049] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f883da0a-4f39-47f1-824b-f2e94084a2d5", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884", Pod:"calico-apiserver-5df7bf54df-pqphd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e0e978cbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.035 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.035 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" iface="eth0" netns="" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.035 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.035 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.055 [INFO][5058] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.055 [INFO][5058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.055 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.060 [WARNING][5058] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.060 [INFO][5058] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.062 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:53.064979 env[1318]: 2025-10-31 01:21:53.063 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.065460 env[1318]: time="2025-10-31T01:21:53.065000114Z" level=info msg="TearDown network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" successfully" Oct 31 01:21:53.065460 env[1318]: time="2025-10-31T01:21:53.065045261Z" level=info msg="StopPodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" returns successfully" Oct 31 01:21:53.065586 env[1318]: time="2025-10-31T01:21:53.065552499Z" level=info msg="RemovePodSandbox for \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" Oct 31 01:21:53.065652 env[1318]: time="2025-10-31T01:21:53.065590542Z" level=info msg="Forcibly stopping sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\"" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.095 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0", GenerateName:"calico-apiserver-5df7bf54df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f883da0a-4f39-47f1-824b-f2e94084a2d5", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df7bf54df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"874cfe2832a26cb2c96ee431cf8f0399e56fd8acf86b6ff08becfd3aa7cfe884", Pod:"calico-apiserver-5df7bf54df-pqphd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e0e978cbab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.096 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.096 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" iface="eth0" netns="" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.096 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.096 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.112 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.113 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.113 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.117 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.117 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" HandleID="k8s-pod-network.d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Workload="localhost-k8s-calico--apiserver--5df7bf54df--pqphd-eth0" Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.119 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:21:53.122325 env[1318]: 2025-10-31 01:21:53.120 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8" Oct 31 01:21:53.122782 env[1318]: time="2025-10-31T01:21:53.122353106Z" level=info msg="TearDown network for sandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" successfully" Oct 31 01:21:53.125888 env[1318]: time="2025-10-31T01:21:53.125830180Z" level=info msg="RemovePodSandbox \"d1cb826e74c7250681f904a9f2e5cbdc8929c235c69336657d604ebd4dc855f8\" returns successfully" Oct 31 01:21:53.126411 env[1318]: time="2025-10-31T01:21:53.126362868Z" level=info msg="StopPodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.154 [WARNING][5104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9l4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef33ba9-4950-4b3a-9079-7b7964e46235", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57", Pod:"csi-node-driver-b9l4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a6cbb2f06e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.154 [INFO][5104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.154 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" iface="eth0" netns="" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.154 [INFO][5104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.154 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.172 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.172 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.172 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.177 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.177 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.179 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:53.182717 env[1318]: 2025-10-31 01:21:53.181 [INFO][5104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.182717 env[1318]: time="2025-10-31T01:21:53.182677978Z" level=info msg="TearDown network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" successfully" Oct 31 01:21:53.182717 env[1318]: time="2025-10-31T01:21:53.182712264Z" level=info msg="StopPodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" returns successfully" Oct 31 01:21:53.183326 env[1318]: time="2025-10-31T01:21:53.183294106Z" level=info msg="RemovePodSandbox for \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" Oct 31 01:21:53.183367 env[1318]: time="2025-10-31T01:21:53.183332861Z" level=info msg="Forcibly stopping sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\"" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.211 [WARNING][5131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9l4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9ef33ba9-4950-4b3a-9079-7b7964e46235", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ff7d424decb9f18eca82fe0b4a4867730cc0701fa25b070bdbe89a09bc13f57", Pod:"csi-node-driver-b9l4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0a6cbb2f06e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.211 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.211 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" iface="eth0" netns="" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.211 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.211 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.229 [INFO][5141] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.229 [INFO][5141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.229 [INFO][5141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.234 [WARNING][5141] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.234 [INFO][5141] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" HandleID="k8s-pod-network.c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Workload="localhost-k8s-csi--node--driver--b9l4v-eth0" Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.235 [INFO][5141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:21:53.239112 env[1318]: 2025-10-31 01:21:53.237 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2" Oct 31 01:21:53.239642 env[1318]: time="2025-10-31T01:21:53.239133568Z" level=info msg="TearDown network for sandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" successfully" Oct 31 01:21:53.247156 env[1318]: time="2025-10-31T01:21:53.247125370Z" level=info msg="RemovePodSandbox \"c7f8a4dd868f6e5212de152f6cfa18a1fd323ed5836e29ff1b201401953fc4c2\" returns successfully" Oct 31 01:21:53.311671 env[1318]: time="2025-10-31T01:21:53.311608842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:21:53.647820 env[1318]: time="2025-10-31T01:21:53.647768443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:53.648982 env[1318]: time="2025-10-31T01:21:53.648921165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:21:53.649159 kubelet[2119]: E1031 01:21:53.649086 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:21:53.649159 kubelet[2119]: E1031 01:21:53.649125 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:21:53.649339 kubelet[2119]: E1031 01:21:53.649292 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:53.649466 env[1318]: time="2025-10-31T01:21:53.649419215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:21:53.990033 env[1318]: time="2025-10-31T01:21:53.989885372Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:53.991233 env[1318]: time="2025-10-31T01:21:53.991162183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:21:53.991481 kubelet[2119]: E1031 01:21:53.991432 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:21:53.991738 kubelet[2119]: E1031 01:21:53.991489 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:21:53.991797 kubelet[2119]: E1031 01:21:53.991742 2119 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-784c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b466566-mfnxs_calico-system(aa2fbf03-d734-4df0-9482-3da8a7ab55e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:53.991895 env[1318]: time="2025-10-31T01:21:53.991855040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:21:53.993458 kubelet[2119]: E1031 01:21:53.993416 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:21:54.327043 env[1318]: 
time="2025-10-31T01:21:54.327000447Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:54.327995 env[1318]: time="2025-10-31T01:21:54.327959015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:21:54.328416 kubelet[2119]: E1031 01:21:54.328325 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:21:54.328570 kubelet[2119]: E1031 01:21:54.328418 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:21:54.328570 kubelet[2119]: E1031 01:21:54.328524 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:54.329911 kubelet[2119]: E1031 01:21:54.329880 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:21:56.311998 env[1318]: time="2025-10-31T01:21:56.311945979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:21:56.648105 env[1318]: time="2025-10-31T01:21:56.647967628Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:56.649339 env[1318]: time="2025-10-31T01:21:56.649292087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:21:56.649528 kubelet[2119]: E1031 01:21:56.649484 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:56.649893 kubelet[2119]: E1031 01:21:56.649538 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:56.649893 kubelet[2119]: E1031 01:21:56.649675 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xllgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-2pcg2_calico-apiserver(06e5831d-75dc-4025-8be9-9be7b711ddfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:56.650861 kubelet[2119]: E1031 01:21:56.650827 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:21:56.809041 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:33248.service. 
Oct 31 01:21:56.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:56.811254 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:21:56.811435 kernel: audit: type=1130 audit(1761873716.808:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:21:56.841000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.842367 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 33248 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:21:56.844921 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:21:56.844000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.848952 systemd-logind[1300]: New session 15 of user core. Oct 31 01:21:56.849886 systemd[1]: Started session-15.scope. 
Oct 31 01:21:56.854921 kernel: audit: type=1101 audit(1761873716.841:477): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.855003 kernel: audit: type=1103 audit(1761873716.844:478): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.859014 kernel: audit: type=1006 audit(1761873716.844:479): pid=5155 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Oct 31 01:21:56.859069 kernel: audit: type=1300 audit(1761873716.844:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc02329eb0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:56.844000 audit[5155]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc02329eb0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:21:56.865563 kernel: audit: type=1327 audit(1761873716.844:479): proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:56.844000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:21:56.867754 kernel: audit: type=1105 audit(1761873716.854:480): pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.854000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.874805 kernel: audit: type=1103 audit(1761873716.855:481): pid=5158 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.855000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.960845 sshd[5155]: pam_unix(sshd:session): session closed for user core Oct 31 01:21:56.961000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.963424 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:33248.service: Deactivated successfully. Oct 31 01:21:56.964462 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 01:21:56.967748 systemd-logind[1300]: Session 15 logged out. Waiting for processes to exit. Oct 31 01:21:56.968635 systemd-logind[1300]: Removed session 15. 
Oct 31 01:21:56.961000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.975159 kernel: audit: type=1106 audit(1761873716.961:482): pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.975219 kernel: audit: type=1104 audit(1761873716.961:483): pid=5155 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:21:56.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.140:22-10.0.0.1:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:21:57.311598 env[1318]: time="2025-10-31T01:21:57.311548678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:21:57.635024 env[1318]: time="2025-10-31T01:21:57.634885969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:21:57.636056 env[1318]: time="2025-10-31T01:21:57.636004359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:21:57.636247 kubelet[2119]: E1031 01:21:57.636203 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:57.636352 kubelet[2119]: E1031 01:21:57.636261 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:21:57.636466 kubelet[2119]: E1031 01:21:57.636430 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-pqphd_calico-apiserver(f883da0a-4f39-47f1-824b-f2e94084a2d5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:21:57.637930 kubelet[2119]: E1031 01:21:57.637888 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:22:01.964234 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:33264.service. Oct 31 01:22:01.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:01.965866 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:22:01.965929 kernel: audit: type=1130 audit(1761873721.962:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:01.993000 audit[5171]: USER_ACCT pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:01.994666 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 33264 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:02.026000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.027981 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:02.031819 systemd-logind[1300]: New session 16 of user core. Oct 31 01:22:02.032788 systemd[1]: Started session-16.scope. Oct 31 01:22:02.034117 kernel: audit: type=1101 audit(1761873721.993:486): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.034261 kernel: audit: type=1103 audit(1761873722.026:487): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.038285 kernel: audit: type=1006 audit(1761873722.026:488): pid=5171 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Oct 31 01:22:02.026000 audit[5171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0eea6ef0 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:02.045410 kernel: audit: type=1300 audit(1761873722.026:488): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0eea6ef0 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:02.045481 kernel: audit: type=1327 audit(1761873722.026:488): proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:02.026000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:02.047756 kernel: audit: type=1105 audit(1761873722.037:489): pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.037000 audit[5171]: USER_START pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.038000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.060679 kernel: audit: type=1103 audit(1761873722.038:490): pid=5174 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.170536 sshd[5171]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:02.170000 
audit[5171]: USER_END pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.172894 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:33264.service: Deactivated successfully. Oct 31 01:22:02.173702 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 01:22:02.174761 systemd-logind[1300]: Session 16 logged out. Waiting for processes to exit. Oct 31 01:22:02.175799 systemd-logind[1300]: Removed session 16. Oct 31 01:22:02.170000 audit[5171]: CRED_DISP pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.184798 kernel: audit: type=1106 audit(1761873722.170:491): pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.184862 kernel: audit: type=1104 audit(1761873722.170:492): pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:02.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.140:22-10.0.0.1:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:03.311528 kubelet[2119]: E1031 01:22:03.311481 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb" Oct 31 01:22:05.581466 systemd[1]: run-containerd-runc-k8s.io-04a929cd8ef9272fa81c1dbb60325ad7a3f1ea4b2848cfc93a7baf794c844df8-runc.gVltBV.mount: Deactivated successfully. 
Oct 31 01:22:05.634198 kubelet[2119]: E1031 01:22:05.634160 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:22:06.311035 kubelet[2119]: E1031 01:22:06.310958 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:22:07.173395 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:41324.service. Oct 31 01:22:07.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:41324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:07.174993 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:22:07.175059 kernel: audit: type=1130 audit(1761873727.172:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:41324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:07.202000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.203843 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 41324 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:07.205791 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:07.210795 kernel: audit: type=1101 audit(1761873727.202:495): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.210854 kernel: audit: type=1103 audit(1761873727.204:496): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.204000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.210172 systemd[1]: Started session-17.scope. Oct 31 01:22:07.210394 systemd-logind[1300]: New session 17 of user core. 
Oct 31 01:22:07.220329 kernel: audit: type=1006 audit(1761873727.204:497): pid=5210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Oct 31 01:22:07.220423 kernel: audit: type=1300 audit(1761873727.204:497): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeff52fcc0 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:07.204000 audit[5210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeff52fcc0 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:07.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:07.229244 kernel: audit: type=1327 audit(1761873727.204:497): proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:07.229287 kernel: audit: type=1105 audit(1761873727.214:498): pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.214000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.215000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 
01:22:07.242220 kernel: audit: type=1103 audit(1761873727.215:499): pid=5213 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.308634 sshd[5210]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:07.308000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.310690 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:41338.service. Oct 31 01:22:07.311093 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:41324.service: Deactivated successfully. Oct 31 01:22:07.312111 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 01:22:07.312362 kubelet[2119]: E1031 01:22:07.312315 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:22:07.308000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.317761 systemd-logind[1300]: Session 17 logged out. 
Waiting for processes to exit. Oct 31 01:22:07.319495 systemd-logind[1300]: Removed session 17. Oct 31 01:22:07.323955 kernel: audit: type=1106 audit(1761873727.308:500): pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.324078 kernel: audit: type=1104 audit(1761873727.308:501): pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.140:22-10.0.0.1:41338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:07.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.140:22-10.0.0.1:41324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:07.349000 audit[5222]: USER_ACCT pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.350731 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 41338 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:07.349000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.349000 audit[5222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5f9aaf0 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:07.349000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:07.351583 sshd[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:07.354580 systemd-logind[1300]: New session 18 of user core. Oct 31 01:22:07.355293 systemd[1]: Started session-18.scope. 
Oct 31 01:22:07.358000 audit[5222]: USER_START pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.359000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.518447 sshd[5222]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:07.518000 audit[5222]: USER_END pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.519000 audit[5222]: CRED_DISP pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.520503 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:41348.service. Oct 31 01:22:07.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.140:22-10.0.0.1:41348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:07.522272 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:41338.service: Deactivated successfully. Oct 31 01:22:07.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.140:22-10.0.0.1:41338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:07.523325 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 01:22:07.523715 systemd-logind[1300]: Session 18 logged out. Waiting for processes to exit. Oct 31 01:22:07.524552 systemd-logind[1300]: Removed session 18. Oct 31 01:22:07.548000 audit[5235]: USER_ACCT pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.549753 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 41348 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:07.549000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.549000 audit[5235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec3beed90 a2=3 a3=0 items=0 ppid=1 pid=5235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:07.549000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:07.550848 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:07.553952 systemd-logind[1300]: New session 19 of user core. Oct 31 01:22:07.554676 systemd[1]: Started session-19.scope. 
Oct 31 01:22:07.557000 audit[5235]: USER_START pid=5235 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:07.558000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.088000 audit[5252]: NETFILTER_CFG table=filter:130 family=2 entries=26 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:08.088000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3350a870 a2=0 a3=7fff3350a85c items=0 ppid=2244 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:08.100253 sshd[5235]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:08.100000 audit[5252]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:08.102361 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:41364.service. 
Oct 31 01:22:08.101000 audit[5235]: USER_END pid=5235 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.101000 audit[5235]: CRED_DISP pid=5235 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.140:22-10.0.0.1:41364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:08.105659 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:41348.service: Deactivated successfully. Oct 31 01:22:08.106271 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 01:22:08.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.140:22-10.0.0.1:41348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:08.107107 systemd-logind[1300]: Session 19 logged out. Waiting for processes to exit. Oct 31 01:22:08.107788 systemd-logind[1300]: Removed session 19. 
Oct 31 01:22:08.100000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3350a870 a2=0 a3=0 items=0 ppid=2244 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:08.166878 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 41364 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:08.167992 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:08.165000 audit[5253]: USER_ACCT pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.166000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.166000 audit[5253]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc5cfb90 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.166000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:08.168000 audit[5258]: NETFILTER_CFG table=filter:132 family=2 entries=38 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:08.168000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 
a1=7ffee390f1c0 a2=0 a3=7ffee390f1ac items=0 ppid=2244 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:08.172445 systemd[1]: Started session-20.scope. Oct 31 01:22:08.173274 systemd-logind[1300]: New session 20 of user core. Oct 31 01:22:08.173000 audit[5258]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:08.173000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee390f1c0 a2=0 a3=0 items=0 ppid=2244 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:08.176000 audit[5253]: USER_START pid=5253 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.177000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.312175 kubelet[2119]: E1031 01:22:08.312128 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235" Oct 31 01:22:08.399000 audit[5253]: USER_END pid=5253 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.399000 audit[5253]: CRED_DISP pid=5253 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.140:22-10.0.0.1:41368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:08.399796 sshd[5253]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:08.402109 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:41368.service. 
Oct 31 01:22:08.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.140:22-10.0.0.1:41364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:08.403321 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:41364.service: Deactivated successfully. Oct 31 01:22:08.404440 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 01:22:08.404693 systemd-logind[1300]: Session 20 logged out. Waiting for processes to exit. Oct 31 01:22:08.406343 systemd-logind[1300]: Removed session 20. Oct 31 01:22:08.433000 audit[5268]: USER_ACCT pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.435488 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 41368 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:08.434000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.434000 audit[5268]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff46b69b50 a2=3 a3=0 items=0 ppid=1 pid=5268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:08.434000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:08.436556 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:08.440123 systemd-logind[1300]: New session 21 of user core. Oct 31 01:22:08.440832 systemd[1]: Started session-21.scope. 
Oct 31 01:22:08.444000 audit[5268]: USER_START pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.445000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.550175 sshd[5268]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:08.549000 audit[5268]: USER_END pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.549000 audit[5268]: CRED_DISP pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:08.552676 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:41368.service: Deactivated successfully. Oct 31 01:22:08.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.140:22-10.0.0.1:41368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:08.553908 systemd-logind[1300]: Session 21 logged out. Waiting for processes to exit. Oct 31 01:22:08.553959 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 01:22:08.554722 systemd-logind[1300]: Removed session 21. 
Oct 31 01:22:11.311637 kubelet[2119]: E1031 01:22:11.311594 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe" Oct 31 01:22:11.312070 kubelet[2119]: E1031 01:22:11.311679 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:22:13.310915 kubelet[2119]: E1031 01:22:13.310865 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:22:13.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:41380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:13.553355 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:41380.service. 
Oct 31 01:22:13.555239 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 31 01:22:13.555307 kernel: audit: type=1130 audit(1761873733.552:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:41380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:13.581000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.583254 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 41380 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:13.584781 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:13.589208 systemd[1]: Started session-22.scope. Oct 31 01:22:13.589559 systemd-logind[1300]: New session 22 of user core. 
Oct 31 01:22:13.583000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.599150 kernel: audit: type=1101 audit(1761873733.581:544): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.599244 kernel: audit: type=1103 audit(1761873733.583:545): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.599270 kernel: audit: type=1006 audit(1761873733.583:546): pid=5284 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Oct 31 01:22:13.583000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf472b4e0 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:13.611221 kernel: audit: type=1300 audit(1761873733.583:546): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf472b4e0 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:13.611262 kernel: audit: type=1327 audit(1761873733.583:546): proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:13.583000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:13.594000 audit[5284]: USER_START 
pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.621898 kernel: audit: type=1105 audit(1761873733.594:547): pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.621938 kernel: audit: type=1103 audit(1761873733.595:548): pid=5287 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.595000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.689690 sshd[5284]: pam_unix(sshd:session): session closed for user core Oct 31 01:22:13.689000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.691624 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:41380.service: Deactivated successfully. Oct 31 01:22:13.692785 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 01:22:13.692834 systemd-logind[1300]: Session 22 logged out. Waiting for processes to exit. Oct 31 01:22:13.693792 systemd-logind[1300]: Removed session 22. 
Oct 31 01:22:13.689000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.705570 kernel: audit: type=1106 audit(1761873733.689:549): pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.705614 kernel: audit: type=1104 audit(1761873733.689:550): pid=5284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:13.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.140:22-10.0.0.1:41380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:13.940000 audit[5300]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=5300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:13.940000 audit[5300]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd28a79870 a2=0 a3=7ffd28a7985c items=0 ppid=2244 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:13.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:13.945000 audit[5300]: NETFILTER_CFG table=nat:135 family=2 entries=104 op=nft_register_chain pid=5300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:22:13.945000 audit[5300]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd28a79870 a2=0 a3=7ffd28a7985c items=0 ppid=2244 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:13.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:22:14.311869 env[1318]: time="2025-10-31T01:22:14.311624363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:22:14.689877 env[1318]: time="2025-10-31T01:22:14.689736016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:14.696051 env[1318]: time="2025-10-31T01:22:14.695984577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:22:14.696262 kubelet[2119]: E1031 01:22:14.696217 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:22:14.696593 kubelet[2119]: E1031 01:22:14.696274 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:22:14.696593 kubelet[2119]: E1031 01:22:14.696395 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d59d10666e4b450bb44fb3ca0b0593f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:14.698634 env[1318]: time="2025-10-31T01:22:14.698597336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:22:15.005565 
env[1318]: time="2025-10-31T01:22:15.005396561Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:15.006534 env[1318]: time="2025-10-31T01:22:15.006486818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:22:15.006830 kubelet[2119]: E1031 01:22:15.006774 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:22:15.006909 kubelet[2119]: E1031 01:22:15.006842 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:22:15.007001 kubelet[2119]: E1031 01:22:15.006965 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7fxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-666d989cd4-28np7_calico-system(c73f2cd7-5e10-439e-b9c8-8be3e29282cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:15.008185 kubelet[2119]: E1031 01:22:15.008139 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb" Oct 31 01:22:18.698846 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:38300.service. Oct 31 01:22:18.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:38300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:22:18.700869 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 01:22:18.700938 kernel: audit: type=1130 audit(1761873738.697:554): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:38300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:18.752000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.753803 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 38300 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc Oct 31 01:22:18.756298 sshd[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:22:18.754000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.764187 systemd-logind[1300]: New session 23 of user core. Oct 31 01:22:18.765893 systemd[1]: Started session-23.scope. Oct 31 01:22:18.770128 kernel: audit: type=1101 audit(1761873738.752:555): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.770206 kernel: audit: type=1103 audit(1761873738.754:556): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.774998 kernel: audit: type=1006 audit(1761873738.754:557): pid=5307 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Oct 31 01:22:18.775096 kernel: audit: type=1300 audit(1761873738.754:557): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6caa890 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:18.754000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6caa890 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:22:18.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:18.788274 kernel: audit: type=1327 audit(1761873738.754:557): proctitle=737368643A20636F7265205B707269765D Oct 31 01:22:18.788462 kernel: audit: type=1105 audit(1761873738.774:558): pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.774000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.777000 audit[5310]: CRED_ACQ pid=5310 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.805009 kernel: audit: type=1103 audit(1761873738.777:559): pid=5310 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.926655 sshd[5307]: pam_unix(sshd:session): session closed for user core Oct 31 
01:22:18.926000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.929878 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:38300.service: Deactivated successfully. Oct 31 01:22:18.931069 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 01:22:18.932562 systemd-logind[1300]: Session 23 logged out. Waiting for processes to exit. Oct 31 01:22:18.933906 systemd-logind[1300]: Removed session 23. Oct 31 01:22:18.926000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.944133 kernel: audit: type=1106 audit(1761873738.926:560): pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.944336 kernel: audit: type=1104 audit(1761873738.926:561): pid=5307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:22:18.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.140:22-10.0.0.1:38300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:22:21.311690 env[1318]: time="2025-10-31T01:22:21.311652768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:22:21.638028 env[1318]: time="2025-10-31T01:22:21.637859774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:21.639222 env[1318]: time="2025-10-31T01:22:21.639150699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:22:21.639488 kubelet[2119]: E1031 01:22:21.639409 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:22:21.639844 kubelet[2119]: E1031 01:22:21.639489 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:22:21.639844 kubelet[2119]: E1031 01:22:21.639646 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vzlbq_calico-system(7147f3bc-4883-48d8-85dc-189c66dbfbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:21.640903 kubelet[2119]: E1031 01:22:21.640852 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vzlbq" podUID="7147f3bc-4883-48d8-85dc-189c66dbfbd3" Oct 31 01:22:22.311358 env[1318]: time="2025-10-31T01:22:22.311302642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:22:22.629727 env[1318]: time="2025-10-31T01:22:22.629569213Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 01:22:22.630749 env[1318]: time="2025-10-31T01:22:22.630700794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:22:22.630960 kubelet[2119]: E1031 01:22:22.630899 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:22:22.631055 kubelet[2119]: E1031 01:22:22.630971 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:22:22.631284 kubelet[2119]: E1031 01:22:22.631222 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:22.631436 env[1318]: time="2025-10-31T01:22:22.631287639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:22:22.945370 env[1318]: time="2025-10-31T01:22:22.945186458Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:22.948124 env[1318]: time="2025-10-31T01:22:22.948036446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:22:22.948447 kubelet[2119]: E1031 01:22:22.948344 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:22:22.948841 kubelet[2119]: E1031 01:22:22.948696 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:22:22.949668 env[1318]: time="2025-10-31T01:22:22.949271223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:22:22.949859 kubelet[2119]: E1031 01:22:22.949294 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-784c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86b466566-mfnxs_calico-system(aa2fbf03-d734-4df0-9482-3da8a7ab55e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:22.951825 kubelet[2119]: E1031 01:22:22.950693 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86b466566-mfnxs" podUID="aa2fbf03-d734-4df0-9482-3da8a7ab55e1" Oct 31 01:22:23.298141 env[1318]: time="2025-10-31T01:22:23.297973330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:23.299359 env[1318]: 
time="2025-10-31T01:22:23.299273962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:22:23.299646 kubelet[2119]: E1031 01:22:23.299596 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:22:23.299718 kubelet[2119]: E1031 01:22:23.299659 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:22:23.300139 env[1318]: time="2025-10-31T01:22:23.299963913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:22:23.300192 kubelet[2119]: E1031 01:22:23.299975 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-pqphd_calico-apiserver(f883da0a-4f39-47f1-824b-f2e94084a2d5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:22:23.301197 kubelet[2119]: E1031 01:22:23.301160 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-pqphd" podUID="f883da0a-4f39-47f1-824b-f2e94084a2d5" Oct 31 01:22:23.310398 kubelet[2119]: E1031 01:22:23.310341 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:22:23.660572 env[1318]: time="2025-10-31T01:22:23.660501003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:22:23.661693 env[1318]: time="2025-10-31T01:22:23.661649495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:22:23.661938 kubelet[2119]: E1031 01:22:23.661888 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:22:23.662012 kubelet[2119]: E1031 01:22:23.661949 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:22:23.662160 kubelet[2119]: E1031 01:22:23.662097 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r2s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&
Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-b9l4v_calico-system(9ef33ba9-4950-4b3a-9079-7b7964e46235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 01:22:23.663300 kubelet[2119]: E1031 01:22:23.663262 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-b9l4v" podUID="9ef33ba9-4950-4b3a-9079-7b7964e46235"
Oct 31 01:22:23.931907 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 01:22:23.932043 kernel: audit: type=1130 audit(1761873743.929:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:23.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:23.929849 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:38308.service.
Oct 31 01:22:23.961000 audit[5323]: USER_ACCT pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.961846 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc
Oct 31 01:22:23.968000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.969043 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:22:23.969439 kernel: audit: type=1101 audit(1761873743.961:564): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.969557 kernel: audit: type=1103 audit(1761873743.968:565): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.972863 systemd-logind[1300]: New session 24 of user core.
Oct 31 01:22:23.973988 systemd[1]: Started session-24.scope.
Oct 31 01:22:23.978659 kernel: audit: type=1006 audit(1761873743.968:566): pid=5323 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Oct 31 01:22:23.968000 audit[5323]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3de63910 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:22:23.985407 kernel: audit: type=1300 audit(1761873743.968:566): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3de63910 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:22:23.985479 kernel: audit: type=1327 audit(1761873743.968:566): proctitle=737368643A20636F7265205B707269765D
Oct 31 01:22:23.968000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:22:23.978000 audit[5323]: USER_START pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.994803 kernel: audit: type=1105 audit(1761873743.978:567): pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.994861 kernel: audit: type=1103 audit(1761873743.978:568): pid=5326 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:23.978000 audit[5326]: CRED_ACQ pid=5326 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:24.074783 sshd[5323]: pam_unix(sshd:session): session closed for user core
Oct 31 01:22:24.075000 audit[5323]: USER_END pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:24.076788 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:38308.service: Deactivated successfully.
Oct 31 01:22:24.077925 systemd-logind[1300]: Session 24 logged out. Waiting for processes to exit.
Oct 31 01:22:24.077926 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 01:22:24.078855 systemd-logind[1300]: Removed session 24.
Oct 31 01:22:24.075000 audit[5323]: CRED_DISP pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:24.089040 kernel: audit: type=1106 audit(1761873744.075:569): pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:24.089089 kernel: audit: type=1104 audit(1761873744.075:570): pid=5323 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:24.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.140:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:24.310629 kubelet[2119]: E1031 01:22:24.310586 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:22:25.312402 kubelet[2119]: E1031 01:22:25.312328 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-666d989cd4-28np7" podUID="c73f2cd7-5e10-439e-b9c8-8be3e29282cb"
Oct 31 01:22:26.312019 env[1318]: time="2025-10-31T01:22:26.311962410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 01:22:26.635678 env[1318]: time="2025-10-31T01:22:26.635517933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 01:22:26.642010 env[1318]: time="2025-10-31T01:22:26.641948703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 01:22:26.642283 kubelet[2119]: E1031 01:22:26.642222 2119 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 01:22:26.642283 kubelet[2119]: E1031 01:22:26.642278 2119 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 01:22:26.642626 kubelet[2119]: E1031 01:22:26.642437 2119 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xllgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df7bf54df-2pcg2_calico-apiserver(06e5831d-75dc-4025-8be9-9be7b711ddfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 01:22:26.643639 kubelet[2119]: E1031 01:22:26.643590 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df7bf54df-2pcg2" podUID="06e5831d-75dc-4025-8be9-9be7b711ddfe"
Oct 31 01:22:29.078330 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:43756.service.
Oct 31 01:22:29.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:43756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:29.085856 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 01:22:29.085919 kernel: audit: type=1130 audit(1761873749.078:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:43756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:29.117000 audit[5339]: USER_ACCT pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.118434 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:lsD8JPjicOMb4IdtMa09c7waD0RtiIVSezpSCib1Gvc
Oct 31 01:22:29.124000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.125214 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:22:29.125414 kernel: audit: type=1101 audit(1761873749.117:573): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.125447 kernel: audit: type=1103 audit(1761873749.124:574): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.128574 systemd-logind[1300]: New session 25 of user core.
Oct 31 01:22:29.129432 systemd[1]: Started session-25.scope.
Oct 31 01:22:29.136845 kernel: audit: type=1006 audit(1761873749.124:575): pid=5339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Oct 31 01:22:29.124000 audit[5339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff09913fc0 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:22:29.144736 kernel: audit: type=1300 audit(1761873749.124:575): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff09913fc0 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:22:29.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:22:29.147360 kernel: audit: type=1327 audit(1761873749.124:575): proctitle=737368643A20636F7265205B707269765D
Oct 31 01:22:29.147405 kernel: audit: type=1105 audit(1761873749.135:576): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.135000 audit[5339]: USER_START pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.136000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.162355 kernel: audit: type=1103 audit(1761873749.136:577): pid=5342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.268084 sshd[5339]: pam_unix(sshd:session): session closed for user core
Oct 31 01:22:29.268000 audit[5339]: USER_END pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.270306 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:43756.service: Deactivated successfully.
Oct 31 01:22:29.271251 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 01:22:29.272488 systemd-logind[1300]: Session 25 logged out. Waiting for processes to exit.
Oct 31 01:22:29.273282 systemd-logind[1300]: Removed session 25.
Oct 31 01:22:29.268000 audit[5339]: CRED_DISP pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.282197 kernel: audit: type=1106 audit(1761873749.268:578): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.282245 kernel: audit: type=1104 audit(1761873749.268:579): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:22:29.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.140:22-10.0.0.1:43756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:22:30.310996 kubelet[2119]: E1031 01:22:30.310956 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"